
                                   RESULTS, ANALYSIS & CONCLUSIONS

This criterion assesses the extent to which you have collected, recorded, processed, and interpreted the data in ways that are relevant to your research question, and whether the patterns in the data are correctly interpreted to reach a valid conclusion.

If there is insufficient data, any treatment will be superficial. You need to recognize the potential for such a lack and revisit the method before you reach the data collection or analysis stage. Alternatively, a lack of primary data could be supplemented by secondary data from data banks or simulations to provide sufficient material for analysis.

Results, analysis, and conclusion (6)

Any treatment of the data must be appropriate to the focus of the investigation in an attempt to answer your research question. The conclusions drawn must be based on the evidence from the data rather than on assumptions. Given the scope of the internal assessment and the time allocated, it is more than likely that variability in the data will lead to a tentative conclusion and may identify patterns or trends rather than establishing causal links. This should be recognized, and the extent of the variability should be considered in the conclusion.

Where possible, the variability should be demonstrated and explained, and its impact on the conclusion fully acknowledged. Please note that by “conclusion” is meant a deduction based on the direct interpretation of the data, such as “What does the graph show?” or “Is any statistical test used to support the conclusion?” An overview of the data in the light of the broader context will be assessed in the criterion for discussion and evaluation.

Conclusion:

  • The conclusion given is correct and clearly supported by the interpretation of the data.

  • Key data from the analysis is given and trends in the data are discussed.

  • The extent to which the hypothesis is supported by the data is explained (avoiding “proves”).

  • The level of support (strong, weak, none, or inconclusive) for the hypothesis/conclusion is identified, correct, and justified.


The conclusion

  • starts with one (or more) paragraphs in which you draw conclusions from results, and state whether or not the conclusions support the hypothesis.  

  • is clearly related to the research question and the purpose of the experiment.  

  • uses the expressions ‘confirmed by the data’ or ‘refuted by the data’ rather than ‘right,’ ‘wrong,’ or 'proven.'

  • provides a brief explanation as to how you came to the conclusion from the results.  In other words, sum up the evidence and explain observations, trends, or patterns revealed by the data. 


Scientific Context:

  • The scientific explanation for the results is described.

  • A comparison is made with published data and theoretical texts (with citations).


The results of the experiment should

  • be explained using accurate and relevant science.  

  • compare the results of your investigation with what would be expected, referencing published data or theoretical texts.

  • compare the conclusions with published research or with the general consensus among scientists about the research question. Do your conclusions conform to the consensus or are they unexpected?


It is not necessary to find an identical investigation with identical results; it is possible to compare your findings with a different investigation whose results either confirm or refute your own.

 

Results, Analysis and Conclusions Rubric

Results, analysis, and conclusion

Students often lose marks for the following errors.

  • Tables and graphs are not labeled correctly. Tables should have an adequate title and appropriate column headings. Axes of graphs should be labeled and units included.

  • Putting units in the cells of a table and not in the column or row headings where they should be.

  • Reporting data to a varying number of decimal places within the same column or row. In a table, for example, the temperature data and dissolved oxygen data may have different numbers of decimal places, but all the temperature readings must have the same number of decimals.

  • It is not possible to carry out a good analysis when there is insufficient data. If the design calls for one pH sample from each of five locations in a stream, then there is no significant analysis that can be carried out with these data and therefore you are likely to perform poorly. Five repeats at each site would have been necessary for good data analysis.

  • Data are often unprocessed. It is expected that you do something with your data (e.g. calculate indices, averages, standard deviations, and so on). Statistical techniques such as the chi-squared test, regression, and the t-test can also be used; although these are not specifically required, they provide a way to achieve full marks in this aspect, provided they are done well (see Appendix). A short sketch of such processing follows this list.

  • When processing data, precision is sometimes artificially increased through mathematical means. Processed data should be reported to the same level of accuracy as the raw data. If a mean is calculated from numbers with two decimal places, for example, it should not be reported to four decimal places.
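
As a rough illustration of the processing described in this list, the sketch below (in Python) calculates means and sample standard deviations for two sets of invented dissolved-oxygen readings and runs an optional t-test, reporting each processed value to the same precision as the raw data. The site names, the readings, and the availability of scipy are assumptions made for the example only.

# Minimal sketch: processing hypothetical dissolved-oxygen data (mg/L, recorded to 2 d.p.)
from statistics import mean, stdev
from scipy import stats   # assumed to be available for the optional t-test

upstream   = [8.12, 8.05, 7.98, 8.21, 8.10]   # five repeats at the upstream site
downstream = [6.44, 6.52, 6.38, 6.61, 6.47]   # five repeats at the downstream site

for name, readings in (("upstream", upstream), ("downstream", downstream)):
    # Report processed values to the same number of decimal places as the raw data.
    print(f"{name}: mean = {round(mean(readings), 2)} mg/L, "
          f"s.d. = {round(stdev(readings), 2)} mg/L (n = {len(readings)})")

# A t-test is not required, but is acceptable if done well: compare the two sites.
t_stat, p_value = stats.ttest_ind(upstream, downstream, equal_var=False)
print(f"Welch's t-test: t = {t_stat:.2f}, p = {p_value:.3f}")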

Presentation of processed data usually takes the form of a scatter plot, pie chart, histogram, bar chart, or some other method of visually portraying the analyzed data. Do not just plot unprocessed/raw data (e.g. if you take temperature readings at 10 different sites on a river, do not just draw a graph of these – plot mean values).
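
As one possible way of producing such a plot, the sketch below (again with invented sites and readings, and assuming matplotlib is available) draws the mean value for each site with standard-deviation error bars rather than plotting the raw readings:

# Sketch: plot processed values (site means with error bars), not the raw readings
import matplotlib.pyplot as plt
from statistics import mean, stdev

readings = {                      # repeat temperature readings (°C) per site
    "Site A": [14.2, 14.5, 14.1, 14.4, 14.3],
    "Site B": [15.8, 16.1, 15.9, 16.0, 15.7],
    "Site C": [17.3, 17.0, 17.5, 17.2, 17.4],
}

sites  = list(readings)
means  = [mean(v) for v in readings.values()]
errors = [stdev(v) for v in readings.values()]

plt.bar(sites, means, yerr=errors, capsize=5)
plt.ylabel("Mean temperature (°C)")        # axes labeled, units included
plt.title("Mean water temperature at each site (error bars = ±1 s.d.)")
plt.savefig("mean_temperature_by_site.png")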

The data you collect must be recorded at the level of accuracy made possible by the precision of the equipment you are using. For example, if plant lengths are measured with a ruler that reports millimeters, the average of these data should not be reported to 8 decimal places in the data tables, but rather to the nearest millimeter. Similarly, if a light meter records to two decimal places, then this is the level of accuracy that should be used when calculating means.

In the conclusion, marks are often lost by not being specific enough. You should cite your data in your conclusions (e.g. if you conclude that in a study of soil moisture along a slope there is a trend towards increasing moisture downslope, this should be illustrated with the actual data). There should be a brief explanation as well: for example, ‘The increase in soil moisture down the slope may be due to run-off and infiltration.’

                                             DISCUSSION & EVALUATION

This criterion assesses the extent to which you discuss the conclusion in the context of the environmental issue and carry out an evaluation of the investigation.

This criterion requires you to reflect on your study. In the first instance, you should evaluate the methodology of your research, discussing the strengths, weaknesses, and limitations of the process. Any research project at this level is likely to be influenced by limitations, and the focus here is to identify these and to reflect on how they have impacted the conclusion of the study. Weaknesses in the experimental design that became evident as a result of carrying out the study may also be discussed here. It is worth noting that although there is no requirement that the report is organized according to the headings of the criteria, consideration of the validity of the data will be assessed as part of the conclusion, and evaluation of the methodology will be assessed in this discussion criterion.

You must also reflect on the outcomes of your investigation in relation to the broader environmental issue, which was raised at the beginning of the internal assessment process. To what extent do your findings support or contrast with information available in the literature? What reasons can you suggest for any similarities or differences? It is at this stage that the focused research question is now widened to re-address the broader environmental issue or concern.

Discussion and evaluation (6)

Discussion
You are required to evaluate your conclusion with respect to your Research Question and Environmental Issue.

  • reflect on the outcomes of your investigation in relation to the broader environmental issue, which was raised at the beginning of the internal assessment process.

  • to what extent do your findings support or contrast with information available in the literature? What reasons can you suggest for any similarities or differences?


It is at this stage that the focused research question is now widened to re-address the broader environmental issue or concern.

Evaluation
Evaluate the methodology of your research,

  • discuss the strengths, weaknesses, and limitations of the methodology and how these impacted the conclusion 

  • discuss weaknesses in the experimental design that became evident as a result of carrying out your study

  • note that the validity of the data will be assessed as part of the conclusion, and evaluation of the methodology will be assessed in this discussion criterion.


Limitations:

  • The variation in results is reported, showing the strength of the conclusion.

  • The appropriateness of the apparatus in obtaining relevant data is commented on.

  • Weaknesses in the methodology are discussed.

  • The reliability of the data is commented on.

  • The quantity of the data is commented on (both MV and RV).

  • The precision, accuracy, and uncertainty in the data are commented on.

  • Outlier data or irregularities in the data are addressed.


The design and method of the investigation must be commented upon as well as the quality of the data. 

  • consider how large your errors or uncertainties are in the results.

  • how confident are you in the results?

  • are your results fairly conclusive, or are other interpretations/results possible?  

 
Identify and discuss significant errors and limitations that could have affected the outcome of the experiment.  

  • were there important variables that were not controlled?  

  • were there flaws in the procedure which could affect the results?

  • are measurements and observations reliable?  

  • was there a lack of replication?


The emphasis in this section should be on systematic errors, not the random errors that always occur in reading instruments and taking measurements. Identify the source of error and, if possible, tie it to how it likely affected the results. 
 

Acceptable Example: 
“Because the simple calorimeter we used was made from a tin can, some heat was lost to the surroundings—metals conduct heat well.  Therefore, the value we obtained for the heat gained by the water in the calorimeter was lower than it should have been.”

Unacceptable Examples:  

  • "The test tubes weren’t clean.”

  • “Human error.”

 
You must not only list the weaknesses but also appreciate how significant they are. Comments about the precision and accuracy of the measurements are relevant. When evaluating the procedure used, look specifically at the processes, use of equipment, and management of time.
 
RULE OF THUMB: Every error or weakness addressed needs an effect on the data and a specific improvement.

Suggestions:

  • Where limitations are determined to be significant, specific improvements are proposed.

  • Improvements effectively and specifically address the limitations.

  • Improvements are given which are possible within the context of a school laboratory.

  • An additional research question is stated with clear IV and DV.

  • The research questions are an extension of the conclusion and evaluation.

  • A short explanation for the question is given to establish its importance and relevance.


Suggestions for improvements should be based on the weaknesses and limitations identified. Modifications to the experimental techniques and the data range can be addressed here. The modifications proposed should be realistic and clearly specified. Suggestions should focus on specific pieces of equipment or techniques used.

It is not sufficient to state generally that more precise equipment should be used. Vague comments such as “I should have worked more carefully” are not acceptable. 

 

Discussion and Evaluation Rubric

Discussion and evaluation

  • The best reports cite literature, indicate how close data is to what might be expected, contain discussion about why data did not support the theory, and include comments about the relative reliability of the data. The calculation of standard deviations allows discussion about the reliability of the data. Although it is not intended that the discussion should turn into a dissertation of several pages, there does need to be a critical look at the quality of the data and how it relates to what is known.

  • A good discussion should identify patterns in the data (or comment on their absence), place the research in a context that relates it to theory and/or research, and assess the quality of the data generated. This is much easier to do if the planning and results sections have been done well. If the research question is tightly focused, and there is sufficient data to address the question, then a discussion is more likely to produce interesting insight. For example, if you have carried out a study of the relationship between temperature and dissolved oxygen at sites above and below a pollution source, you should address the quality of the data. Is it reliable? Why, or why not? This is where having means and standard deviations can be useful.

Standard deviation (which can easily be worked out on a scientific calculator) shows the variation in the data: if there is a very large standard deviation, you would be expected to comment on this fact and interpret it (i.e. large variation means that the data are less reliable). A short sketch of how this can be judged follows after this list.

  • The discussion should be thought-provoking and will almost certainly be the most challenging (and perhaps lengthiest) part of the report. Are there important differences among the data? Are there trends? Do these trends support/refute accepted theory? Are the standard deviations in the data so huge as to make differences meaningless? Are there anomalies in the data? These should be discussed, and if they are to be ignored or excluded from the analysis, a case for this decision should be made. Were the samples collected without significant bias? Are there literature values that can be used for comparison? If there are, these should be mentioned. If these are non-existent or unavailable, a note to this effect should be included.

  • You need to look at your method critically and offer improvements. Many students, however, miss the most obvious improvement (i.e. collection of more data, repeating the experiment, and calculation of averages). Potential marks are generally lost by making suggestions that are either too simple or unrealistic. In the evaluation, data quality issues that may have been noted in the discussion should be addressed. Was the standard deviation very high? How can it be reduced? Is the data representative? If not, how can that be addressed? What improvements will address the issues that have been identified? All these questions should be answered in this section of the report.
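
As referenced above, here is a rough sketch (with invented readings and an illustrative threshold, not an official criterion) of how the size of the standard deviation relative to the mean can be used to decide whether the variation is large enough to need comment:

# Sketch: judge reliability by comparing the spread with the mean (coefficient of variation)
from statistics import mean, stdev

site_readings = {
    "above outfall": [8.1, 8.0, 8.2, 7.9, 8.1],   # tight cluster -> more reliable
    "below outfall": [5.2, 7.8, 4.1, 6.9, 3.5],   # wide spread -> less reliable
}

for site, values in site_readings.items():
    m, s = mean(values), stdev(values)
    cv = s / m * 100                               # spread expressed as a % of the mean
    verdict = "small variation" if cv < 10 else "large variation - comment on and interpret this"
    print(f"{site}: mean = {m:.1f}, s.d. = {s:.1f}, CV = {cv:.0f}% ({verdict})")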
