Volume 14 (2025)

Differences of Half-Split Equations on Estimating Test-Reliability Coefficient

Article Number: e2025015  |  Available Online: January 2025  |  DOI: 10.22521/edupij.2025.14.15

Omar Saleh Bani Yassin , Aiman Mohammad Freihat , Sabri Hassan Al-Tarawneh

Abstract

Background/purpose. This study aimed to investigate the differences among the equations used to estimate the reliability coefficient by the half-split method. These equations are Spearman-Brown’s, Rulon’s, Guttman’s, Mosier’s, Flanagan’s, and Horst’s.


Materials/methods. The study instrument was a 43-item scale for evaluating the computerized mathematics curriculum for the tenth grade in southern Jordan. It was applied to a sample of 303 male and female teachers and educational supervisors.


Results. The results showed that all values of the reliability coefficients estimated by the six equations were acceptable. In addition, the best equation for estimating the half-split reliability coefficient was the Spearman-Brown equation, followed by Flanagan’s and Rulon’s equations.


Conclusion. Considering the results of the current study, the researchers do not recommend using Mosier’s equation because it gave the lowest reliability-coefficient value.

Keywords: half-split, reliability, tests

References

Abu-Saree’, R. A. (2004). Data analysis using SPSS. Dar Al Fikr.

Aiken, R. (2003). Psychological testing and assessment. Allyn and Bacon.

Al-Ghareeb, R. (1998). Psychological and educational evaluation and measurement. Anglo-Egyptian Library.

Allam, S. A. D. (2010). Educational and psychological measurement and evaluation: Its basics, applications, and contemporary trends. Dar Al-Fikr Al-Arabi.

Al-Majeed, S. (2010). Psychological tests (models). Safa’s House for Publishing and Distribution.

Al-Majeed, S. (2013). Foundations of the construction of psychological and educational tests and scales. Debono Center for Teaching Thinking.

Al-Nabhan, M. (2004). Fundamentals of measurement in behavioral sciences. Dar Al Shorouk for Publishing and Distribution.

Al-Qatawna, A. (2015). Reliability of criterion-referenced tests in mathematics for the tenth grade according to classical test theory and item response theory (two-parameter model): A comparative study (Unpublished master’s thesis). Mu’tah University, Karak, Jordan.

Al-Tarawneh, S., & Al-Qadi, H. (2016). Evaluation of the 10th grade computerized mathematics curriculum from the perspective of the teachers and educational supervisors in the Southern Region in Jordan. Journal of Education and Practice, 7(2), 39–47.

Al-Tarawneh, S. (2022). Principles of measurement and evaluation.

Al-Turairi, A. (1997). Psychological and educational measurement: Its theory, foundations, and applications. Al-Rushed Library for Publishing and Distribution.

Al-Zahrani, S. (2000). Comparison of methods for estimating reliability in fun-telling measurement. Umm Al-Qura University.

Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. Holt Rinehart and Winston. 

Feldt, L. S., Woodruff, D. J., & Salih, F. A. (1987). Statistical inference for coefficient Alpha. Applied Psychological Measurement, 11(1), 93-103. https://doi.org/10.1177/014662168701100107

Hakstian, A. R., & Whalen, T. E. (1976). A k-sample significance test for independent alpha coefficients. Psychometrika, 41(2), 219–231. https://doi.org/10.1007/BF02291840

Ismail, B. (2004). Reference in psychological measurement. Anglo-Egyptian Library.

Ismail, H. (2014). Extracting the psychometric properties of the teacher quality standards scale on a sample of teachers in the state of El Oued (Unpublished master’s thesis). University of Blida.

Kim, S., & Feldt, L. (2008). A comparison of tests for equality of two or more independent Alpha coefficients. Journal of Educational Measurement, 45(2), 179–193. https://doi.org/10.1111/j.1745-3984.2008.00059.x

Melhem, S. M. (2002). Measurement and evaluation in education and psychology (2nd ed.). Dar Al-Masirah.

Onn, D. (2013). Classical test theory versus item response theory: An evaluation of the comparability of item analysis results. Retrieved from https://ui.edu.ng/sites/default/files

Saeed, M. (2015). Modern trends in educational measurement and evaluation: Achievement file. Dar Al Nahda Al Arabiya.

Saeed, M. (2019). Validity of secondary school exam scores in predicting the achievement of first-year students at the Faculty of Education, Beni Suef University. Arab Journal of Measurement and Evaluation, 1(2), –84. https://doi.org/10.21608/AJME.2020.200201

Saeed, M. (2023). Shifting from learning assessment to assessment for learning. Journal of the Faculty of Education, Beni Suef University, 20(11), 1–11. https://doi.org/10.21608/JFE.2023.337355

Stanley, J. C., & Hopkins, K. D. (1998). Educational and psychological measurement and evaluation. Prentice-Hall.

Thompson, B., Green, S., & Yang, Y. (2010). Assessment of the maximal split-half coefficient to estimate reliability. Educational and Psychological Measurement, 70(2), 232–251. https://doi.org/10.1177/0013164409355688

Trevisan, S., Sax, G., & Michael, W. (1991). The impact of students’ ability on test validity and reliability. Educational and Psychological Measurement, 51, 829–837.

Walker, D. (2006). A comparison of the Spearman-Brown and Flanagan-Rulon formulas for split-half reliability under various variance parameter conditions. Journal of Modern Applied Statistical Methods, 5(2), 443–451. http://digitalcommons.wayne.edu/jmasm/vol5/iss2/18

Zare’, N. (2021). Comparison of the coefficients of the reliability of test scores under sets of conditions: A Monte Carlo simulation study. Educational Journal, 2(88), 1108–1174.

Zimmerman, D. W., Williams, R. H., & Symons, D. L. (1984). Empirical estimates of the comparative reliability of matching tests and multiple-choice tests. Journal of Experimental Education, 52(3), 179–182. https://doi.org/10.1080/00220973.1984.1101189

Announcement


Message from the Editor-in-Chief,

We would like to inform our authors, reviewers, and stakeholders that EDUPIJ has entered Scopus’s re-evaluation process, as officially communicated (dated 2025-12-09). This assessment is a standard quality assurance practice applied to indexed journals and aims to ensure sustained editorial quality, ethical integrity, and alignment with Scopus’s evolving evaluation framework.

EDUPIJ welcomes this process and views it as an opportunity to further consolidate its editorial governance, strengthen publication ethics, and enhance peer-review rigor.

Strengthening Editorial and Ethical Standards

To ensure full compliance with international best practices and to proactively address Scopus evaluation criteria, the following measures have been formally implemented:

1. Selective Acceptance Policy for 2026 and Beyond

In response to the increased submission volume in 2025 (see Journal Metrics: https://edupij.com/index/sayfa/18/journal-metrics), EDUPIJ will adopt a more selective acceptance policy starting in 2026 and continuing in the years ahead. The geographic distribution of authors will also be taken into account, so that editorial decisions are informed by transparent, year-to-year submission and authorship patterns. Acceptance rates will be carefully aligned with editorial capacity to sustain a rigorous double-blind peer review process supported by active reviewer engagement and uncompromised editorial oversight. This policy reflects our commitment to quality-driven growth rather than volume-based expansion. It also directly addresses observations that the geographic spread of authors changed significantly during the same period, by ensuring that any such shifts are systematically monitored and considered within our quality assurance framework.

In line with this approach, we have adopted a Publication Volume Policy, enacted on 2025-12-07, which establishes clear upper limits on annual publication volume and defines a framework for maintaining EDUPIJ’s output at sustainable, long-term levels, comparable to pre-2025 volumes under normal conditions. This policy is also publicly available at https://edupij.com/index/sayfa/41/publication-volume-journal-metrics-policy.

From 2026 onwards, our objective is to maintain a moderate and stable annual volume, prioritising quality and selectivity rather than growth.

2. Enhanced Author and Manuscript Integrity Screening

All submissions now undergo mandatory integrity checks, including automated screening for retraction history and potential ethical risks prior to peer review. These procedures are designed to safeguard originality, research integrity, and transparency at every stage of the editorial process.

3. Establishment of a Publication Ethics Review Committee

A dedicated Publication Ethics Review Committee has been constituted to evaluate high-risk submissions, oversee ethical investigations when necessary, and ensure consistent adherence to COPE guidelines and internationally recognized publishing standards. All ethical decisions are documented and managed through a structured, transparent process.

Ongoing Commitment:

EDUPIJ remains firmly committed to rigorous double-blind peer review, transparent editorial policies, responsible scholarly communication, and the advancement of high-quality educational research at an international level.

Our journal continues to demonstrate steady progress in terms of international visibility, indexing coverage, and citation performance. We are confident that the Scopus re-evaluation process will further support the journal’s long-term sustainability and academic impact.

We sincerely thank our authors, reviewers, and the broader scholarly community for their continued trust and contribution to EDUPIJ.

Sincerely,
Prof. Turgut Karaköse, Editor-in-Chief


Posted: 2025-12-09