TEST ITEM ANALYSIS OF READING COMPREHENSION EXAMINATION FACULTY OF TEACHERS AND TRAINING EDUCATION

Authors

  • Viator Lumban Raja Universitas Katolik Santo Thomas Medan

DOI:

https://doi.org/10.54367/kairos.v4i1.847

Abstract

It is not uncommon to blame the students when they fail the semester examination. The examiner, or the one who constructs the test, is rarely blamed or asked why such a thing can happen. The question of whether the test is valid or reliable is never raised; in other words, the test itself is never evaluated to determine whether it meets the requirements of level of difficulty and power of discrimination. Madsen (1983: 180) says that item analysis tells us three things: (1) how difficult each item is, (2) whether or not the question discriminates, that is, tells the difference between high and low students, and (3) which distracters are working as they should. This reading comprehension examination consists of 44 items: 35 items of reading comprehension and 9 items of vocabulary. The number of test takers is 18 students. The result of the analysis shows that only 5 students (27.7%) can do the test at an average level, meaning they answer at least 50% of the total test items correctly. This belongs to the moderate category, neither high nor excellent. Of the 44 test items, 33 (75%) are bad items in that they do not fulfill one or both of the requirements concerning level of difficulty and power of discrimination, and only 11 items (25%) meet both requirements. Regarding the distracters, there are 20 items (45.45%) in which one or two distracters are not chosen at all. In two items (4.54%), items 25 and 34, the correct answer is not chosen by any test taker, including the high and low groups. In short, these 20 items need revising in terms of distracters. Revision is made to those items whose distracters are not chosen and to those which do not fulfill the requirements of level of difficulty and power of discrimination: distracters which look too easy are changed, and those which are not chosen at all are revised.
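The difficulty and discrimination indices discussed in the abstract are standard in classical item analysis. A minimal sketch of the computation is shown below; it assumes the common definitions (difficulty p = proportion of all test takers answering the item correctly, discrimination D = p_upper − p_lower, comparing the top and bottom halves of students ranked by total score), not any specific formulas from the article itself. All names and data here are illustrative.

```python
# Classical item analysis sketch: difficulty (p) and discrimination (D).
# Assumptions (not from the article): p = proportion of all test takers
# answering correctly; D = p_upper - p_lower, using the top and bottom
# halves of the students ranked by total score.

def item_analysis(responses, key):
    """responses: one answer string per student; key: the answer-key string.
    Returns a (difficulty, discrimination) pair for each item."""
    # Total score per student.
    scores = [sum(r[i] == key[i] for i in range(len(key))) for r in responses]
    # Rank students from highest to lowest total score.
    ranked = [r for _, r in sorted(zip(scores, responses), key=lambda t: -t[0])]
    half = len(ranked) // 2
    upper, lower = ranked[:half], ranked[-half:]
    results = []
    for i in range(len(key)):
        p = sum(r[i] == key[i] for r in responses) / len(responses)
        p_up = sum(r[i] == key[i] for r in upper) / half
        p_lo = sum(r[i] == key[i] for r in lower) / half
        results.append((round(p, 2), round(p_up - p_lo, 2)))
    return results

# Toy example: 4 students, 3 items, answer key "ABC".
stats = item_analysis(["ABC", "ABD", "BBC", "CDD"], "ABC")
# stats[0] -> (0.5, 1.0): half got item 1 right, and it separates
# the high group from the low group perfectly.
```

An item with D near zero (like the third item in the toy data, which both groups answer equally often) is exactly the kind of item the article flags as failing the power-of-discrimination requirement.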

References

Alyousef, S. 2005. "Teaching Reading Comprehension to ESL/EFL Learners", The Reading Matrix Journal, 5(2), 144-150.

Ardhana, Wayan. 1987. Bacaan Pilihan Dalam Metode Penelitian Pendidikan. Jakarta: Depdikbud, Dikti.

Ary, Donald, et al. 1982. Introduction to Research in Education. New York: Holt Rinehart and Winston.

Cameron,L.2001. Teaching Language to Young Learners. Cambridge: Cambridge University Press.

Copperud, Carol. 1979. The Test Design Handbook. Englewood Cliffs: Educational Publications, Inc.

Gronlund, Norman E. 1985. Measurement and Evaluation in Teaching. New York: Macmillan Publishing Company.

Harris, David P. 1977. Testing English as a Second Language. New Delhi: Tata McGraw-Hill Publishing Company.

Madsen, Harold S. 1983. Techniques in Testing. Oxford: Oxford University Press.

Pebriawan, I. 2015. "The Correlation Between Vocabulary Mastery and Students' Reading Comprehension", Journal FKIP UNILA, 4(7), 123-144.

Saleemi, Anjum P. 1988. "Language Testing: Some Fundamental Aspects", English Teaching Forum, Vol. XXVI, January 1988.

Tuckman, Bruce W. 1975. Measuring Educational Outcomes: Fundamentals of Testing. New York: Harcourt Brace Jovanovich, Inc.

Valette, R.M. 1977. Modern Language Testing. New York: Harcourt Brace Jovanovich, Inc.

Published

2020-07-28

How to Cite

Lumban Raja, V. (2020). TEST ITEM ANALYSIS OF READING COMPREHENSION EXAMINATION FACULTY OF TEACHERS AND TRAINING EDUCATION. Kairos English Language Teaching Journal, 4(1), 52–65. https://doi.org/10.54367/kairos.v4i1.847

Section

Article