How can critical and analytical thinking be taught so that classroom science mimics real-life research and prepares students for university courses? The data sets obtained in students' own experiments were used to encourage them to evaluate results, experiments, and published information critically. The examples below show that students can learn to compare and defend their experimental results, bringing their work closer to real research and strengthening their critical-thinking skills.

Most schools today aim to promote critical thinking as part of their science education, so that students acquire intellectual habits that will lead to later success (Noddings, 2008). During a talk at Boğaziçi University, Istanbul, in 2007, J. Gilbert of the University of Reading stated that "the school curriculum is the garbage bin for old science. We need to bring real science and school science closer together." Although he was referring to curricular content, this could just as well be applied to the scientific method and project discussions. A report issued by the National Research Council (Singer et al., 2005) found that although many lab activities were done as part of science courses, many did not reflect the process of scientific inquiry. The report's authors felt that the activities did not help students understand how scientists really use lab investigations to explore new areas. Klionsky (2004) affirms that retaining facts is not as important as learning concepts, or "learning how to think and solve problems." He elucidates this further: "usually the facts are not as important as learning how…[to] develop the ability to ask questions, to find relevant information, and to use that information to solve a problem – in other words, developing critical thinking skills."

Having graded many lab reports and project discussions over the years, I was disappointed each time a discussion amounted to a list of possible errors made during the experiment, especially when expected results had not been obtained. Students unquestioningly accepted results from other sources and rejected their own if they did not match the textbook, the Internet, or the teacher's facts. The same occurred during projects, which were chosen and executed individually on many diverse topics.

In real scientific research, results are verified and then supported with data from other sources or used to refute previous data. The question thus arose of whether it was possible to encourage students to do the same in their lab reports. This would entail doing the experiments “to the best of their abilities” and then finding research and reasons to support their results, regardless of whether they agreed with expected results.

Biology is about life, in which one of the constants is change. A different environment, habitat, species, or specimen can (and often does) make a difference in the results obtained. For example, research results on tortoise physiology in Europe can differ from those of similar species in North America. As climate, vegetation, habitat, and food sources differ, tortoise physiology could (and probably does) differ in turn. Results thus need to be accepted, explained, and justified, not just written off as experimental errors.

A list of errors may be valid for science experiments, especially physics and chemistry, but I felt that biology labs could promote a deeper understanding of the reality of scientific research. If students learned about errors and improvements during other science courses, could they not learn something different in Biology? Because many students seem unable to confidently compare their results to those of others, analyze their results critically, or defend and justify their results, I made an attempt to entice them to think and write more like real scientists.

Teaching Students to Trust & Defend Their Results

To investigate whether students could get closer to “real scientific research thinking and writing,” two biology classes in 2007–2008 were given a different strategy to use when evaluating their results. Their lab reports were then compared to those of the classes in 2006–2007. This strategy was based on repeatedly emphasizing the following four concepts.

1. Errors and improvements should not be mere excuses.

Most science students can explain their data if expected results are obtained. However, if the results deviate from the expected, meaning that the hypothesis would be rejected, students just assume that there were errors in their procedure. The discussion then contains a long list of possible errors, such as “The equipment was old”; “if we had measured more accurately…”; “the leaf was torn in one place”; “the solution was not concentrated enough”; and “the light was placed too close….”

During the 2007–2008 year, students were told that a list of excuses was not acceptable as errors. They should think carefully before submitting the “errors and improvement” sections of their discussions.

2. Unexpected results lead to new discoveries.

The 2007–2008 students were then taught that all results need to be given due consideration. If unexpected results were rejected and explained away merely as experimental errors, new discoveries would take longer to emerge. Examples and worksheets were used to illustrate critical thinking, and the students watched the first 40 minutes of the NOVA Online video "The Cancer Warrior," which discusses actual cancer research and gives insight into scientific work and discoveries.

Students were thus encouraged to realize that biology experiments are not clear-cut recipes – they need to be adapted and modified to suit the situation – and that astounding and unusual occurrences can lead to breakthroughs in research. They saw that scientists have the opportunity to repeat their experiments and so confirm results if they suspect procedural errors. However, they would accept unexpected yet verified results and then explain and substantiate them to attain new knowledge and discoveries.

3. Accept and defend your results.

After the examples and video, students were given a new motto for their lab reports. Although the students didn’t have the opportunity to repeat experiments unless working on a project, the concept of “Trust your results and justify them” became a regular phrase and was heard before, during, and after each lab. The effect of this phrase can be seen in the examples below.

Example Lab Reports

As part of their practical work, the 2006–2007 students were asked to design an experiment in which they needed to test the effect of one variable (temperature, pH, substrate concentration, or enzyme concentration) on the breakdown of hydrogen peroxide by catalase. They then did the experiment and wrote a lab report. They were given the grading rubric for the report before starting their experiment (Table 1: Enzyme Lab).

Table 1. Grading rubrics for the enzyme and pepsin lab discussions.

Discussion Criteria                                                     | Points for Enzyme Lab | Points for Pepsin Lab
Explanation of results                                                  |                       |
Comparison with other results and discussion of comparison              |                       |
Errors and improvements                                                 |                       |
Further questions and hypotheses                                        |                       |
Correctly referenced within the text                                    |                       |
Extent of research                                                      |                       |
Trust/acceptance of results – justification and explanation of results  |                       |
Total points for discussion                                             | 6                     | 3
Total points for lab report                                             | 20                    | 15

Here are a few excerpts from the lab discussions:

  • “This difference shows that we have errors in our experiment.”

  • “This proves that the experiment is accurate, and we did not have many errors.”

  • “Some random errors were involved in our experiment, resulting in our values being not quite so accurate. We couldn’t accurately tell exactly when the bubbling finished. We had accuracy errors while measuring the amount of hydrogen peroxide, we should have used a pipette instead of a measuring cylinder….”

In one of the 2007–2008 practical sessions, the students were told to test the effect of different pH levels on the digestion of protein by pepsin. They made set combinations of distilled water, HCl, NaHCO3, and pepsin to obtain solutions with different pH values. They left the protein (albumin) in the solutions overnight and recorded their results 24 hours later. They then wrote a full lab report, using the grading rubric given (Table 1: Pepsin Lab) and the new motto.

The point for “errors and improvements” was deliberately omitted in the pepsin lab (Table 1) so that students would not be encouraged to emphasize it. No point was given for “extent of research,” either, given that a point had already been awarded for comparison of results and they had been asked to find additional sources containing data rather than just information (such as a textbook). Only three points were awarded for the pepsin lab discussion (20%) because this was going to be a very big learning experience for them and they shouldn’t be penalized unnecessarily.

These excerpts from discussions show that some students understood the new concept:

  • “[T]here occurred a difference between the digestion of protein in those tubes; this clearly shows us that HCl plays a more important role in the digestion of proteins compared to pepsin…. Although this source [previously cited] highlights that HCl itself has no effect in protein digestion directly, but can only convert pepsinogen to pepsin and denature food; our results prove [to] us that HCl has a more significant effect in the breakdown.”

  • “Scientists who do plenty of research on the topic [are] divided into two about the effect of antacids…. [According to] this source [previously cited] there cannot be any sort of digestion after antacids are taken. However, we did not really observe such a result in our experiment. Our results show that adding antacid doesn’t stop digestion but it decreases the amount of protein which is digested.”

4. Compare your data set to those of others critically and analytically.

The fourth concept emphasized to the 2007–2008 students was the necessity of critically comparing their data to those available in the literature, while still accepting and defending their own data, just as scientists do. Examples from before and after this emphasis on critical comparison are shown below.

Final Projects

In their final year of biology, students do an independent final project. Each decides on a topic, does background research, and then designs and carries out a controlled experiment. Detailed instructions and a precise rubric are handed out at the beginning of the second semester so that all 17 assessment criteria are fully understood.

Of the 108 final projects received in 2005–2006 and 2006–2007, only one had a critical and analytical comparison of the data (<1%). Students were good at including other data but would merely state any differences in the results as errors in their own experimental procedure or equipment.

However, the discussion section of the 2007–2008 rubric was adjusted to encourage students to use their critical and analytical thinking skills: a separate section was added under the heading "Critical Analysis: Comparison with other results or information and critical evaluation thereof." A year of "accept your data and defend it," together with the emphasis on critical thinking in the rubrics, resulted in five comparative discussions out of 40 final projects (12.5%). Although these comparisons were not deeply critical, they showed that students put much more thought into their discussions.

The most basic comparison was this: “However, the experiment concludes that acetylsalicylic acid produces stronger, tolerant, but slowly germinating seeds…. [A]cetylsalicylic acid is obviously thought to have a positive effect on germination. The results of [my] experiment do not favour this.” Unfortunately, the student did not go on directly to explain why there were differences in the results of the two experiments, leaving the reader to make the analysis.

Another student pointed out why sourced data differed from their data but again did not go into detail: “Although R. Sekular claimed that he tested the short-term memory of his subjects and successfully measured them, he missed a really crucial point. He just did one type of test and focused on its variations. That’s why his chance of making mistakes was much higher than mine.”

The statement that “This study is the most inspiring among the others because the article was published in PubMed and the scientists clearly indicate and trust their results (Hirosawa)” shows that this student considered the reliability of the various sources she used. Another student opted to accept a less reliable resource because the data agreed with his data but only after mentioning other, more reliable data that did not support his findings: “And as my experiments say that the poles of the permanent magnets have an effect on the growth of the plants, it agrees with the amateur curious scientists (like U. J. Pittmaan and Albert R. Davis) who – though nonscientifically – say that the poles of the permanent magnet do have an undetermined effect on the growth of the plants.”

Lastly, a student analyzed the data from a source, read the explanation of those data, and then disagreed with the explanation given: “I draw a different conclusion from their results [Dijk et al., Komada et al.] which coincide with my results. They say that the brightness of the [computer] display has no effect on sleep latency. Yet, there is a difference in the BD [bright display] and DD [dark display] of [the] game. When exposed to the dark display, the subjects have shorter sleep latency than those exposed to the bright display. These results [their published data], although interpreted differently by the scientists, agree with my results that light increases sleep latency….”


The use of the "trust your results and defend them" strategy over one year had the desired effect on a small number of students. Students were encouraged to think critically when analyzing their results and to find acceptable literature to explain them, even when doing so meant rejecting their hypothesis. De-emphasizing errors and improvements reduced the number of "excuses" given for unexpected results.

However, accepting and defending all data can also lead students to incorrect conclusions (e.g., that HCl plays a greater role in the digestion of protein than pepsin). The question, then, is whether content or critical-thinking skills are more important for high school learning. Do we teach students to pass school exams and SATs, or do we prepare them for university thinking? Is critical-thinking ability of much use if exam and SAT scores are lower?

Mansilla and Gardner (2008) claimed that content is less important than learning to evaluate that content critically, especially in an age in which information, reliable as well as unreliable, is freely and easily available to everyone from many different sources, including the Internet. Students will have to learn to critically evaluate all data and information as well as their sources.

Today, the information revolution and the ubiquity of search engines have rendered having information much less valuable than knowing how to think with information in novel situations. To thrive in contemporary societies, young people must develop the capacity to think like experts. (Mansilla & Gardner, 2008)

The Swedish theorists Marton and Säljö (1976) first described the distinction between deep learning and memorizing, or surface learning, in the 1970s. In deep learning, understanding concepts and principles is the key that makes thinking about arguments and evidence (i.e., critical thinking) possible. Such learning "produces a sustained and substantial influence on the way [students] think, act and feel" (Bain, 2004, p. 17). Once a student is capable of thinking critically and evaluating information, content can be questioned and examined for flaws, thus eliminating misconceptions.

Evidence from as far back as 1985 (Bain, 2004, pp. 22–24) shows that students learn to memorize formulae easily and can then solve problems using the “plug and chug” method in university physics and math courses. The same experiments by Halloun and Hestenes (1985, as described in Bain, 2004) also demonstrate that it is difficult to get students to change preconceived ideas from high school, even after a year at university. It is thus important to teach critical-thinking skills in high school for students to apply in later life.

As Bain (2004, p. 88) states, exceptional teachers want their students to “take and defend a position” on a topic or issue in class discussions, projects, or papers, allowing them to express their views in a safe learning environment. Having students “accept their results and defend them” allows them to try out critical evaluation of their data as well as available sources of information in the relative safety of high school. Biology, in particular, lends itself easily to the acceptance of different results for similar experiments, depending on the species or environment used, with relatively easily substantiated explanations.


The aim of these teaching strategies was to improve students' learning and bring their scientific thinking closer to that of "real scientists." The changes in the thinking and writing of five of the 2007–2008 students were encouraging, showing that they felt confident applying critical thinking and could come up with reasons for their results. After only one year of exposure to the new strategy, these students managed to "accept and defend" their results, demonstrating critical-thinking and evaluation skills to varying degrees.


References

Bain, K. (2004). What the Best College Teachers Do. Cambridge, MA: Harvard University Press.
Halloun, I.A. & Hestenes, D. (1985). Common-sense concepts about motion. American Journal of Physics, 53, 1056–1065.
Klionsky, D.J. (2004). Talking biology: learning outside the book – and the lecture. Cell Biology Education, 3, 204–211.
Mansilla, V.B. & Gardner, H. (2008). Disciplining the mind. Educational Leadership, 65(5), 14–19.
Marton, F. & Säljö, R. (1976). On qualitative differences in learning – II: outcome as a function of the learner's conception of the task. British Journal of Educational Psychology, 46, 115–127.
Noddings, N. (2008). All our students thinking. Educational Leadership, 65(5), 8–13.
Singer, S.R., Hilton, M.L. & Schweingruber, H.A., Eds. (2005). America’s Lab Report: Investigations in High School Science. Washington, D.C.: National Academies Press.