Wednesday, July 31, 2019

Guidelines for Effective Grading


  1. Describe grading procedures to students at the beginning of instruction
  2. Clarify that course grade will be based on achievement only
  3. Explain how other factors (effort, work habits, etc.) will be reported
  4. Relate grading procedures to intended learning outcomes
  5. Obtain valid evidence (test, etc.) for assigning grades
  6. Try to prevent cheating
  7. Return and review all test results as soon as possible
  8. Properly weight the various types of achievements included in the grade
  9. Do not lower an achievement grade to tardiness, weak effort or misbehavior
  10. Be fair. Avoid bias. When in doubt, review the evidence. If still in doubt, give the higher grade.

Saturday, July 27, 2019

Developing a Grading and Reporting System


  • Achievement reports should be separate from effort expended
  • Should be developed cooperatively (parents, students, school personnel) in order to ensure a more adequate and understandable system

Grading and Reporting System should be…

  • based on clear statement of learning objectives
  • consistent with school standards
  • based on adequate assessment
  • based on the right level of detail
  • supplemented by parent-teacher conferences

Assigning Letter Grades and Computing Grades

  • Grades assigned to students must include only achievement
  • If achievement and effort are combined in some way, grades would mean different things for different individuals
  • Grades reflected on report cards are numerical quantities arrived at by combining several measures of the student’s performance
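Combining several measures into one numerical grade is usually a weighted average, as in this minimal sketch (the component names and weights below are hypothetical, not a prescribed scheme):

```python
# Combine several performance components into one numerical grade
# using a weighted average. Components and weights are invented
# examples for illustration only.

def weighted_grade(scores, weights):
    """scores and weights are dicts keyed by component name."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[c] * weights[c] for c in scores)

scores = {"quizzes": 85, "exams": 78, "projects": 92}
weights = {"quizzes": 0.30, "exams": 0.50, "projects": 0.20}
print(round(weighted_grade(scores, weights), 2))  # 82.9
```

Note that the weights, not the raw component scores, determine how much each type of achievement counts toward the final grade.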

Types of Grading

1. Norm-Referenced Grading

  • Reflects relative performance (i.e. score compared to those of other students)

In this system,

  • Grade (like a class rank) depends on what group the student is in, not just on his own performance
  • Typical grade may be shifted up or down, depending on the group’s ability
  • Widely used because much classroom testing is norm-referenced

In norm-referenced grading….

  • An outside person can decide which student in the group performed best
  • Takes into account circumstances beyond the student’s control (poor teaching, bad tests, etc.): since these affect all students equally, overall performance may drop but relative standing stays the same
  • Gives an outside evaluator little information about what a student actually knows, since this varies from class to class
  • Assumes sufficient variability among student performances that the differences in learning between them justify giving different grades
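The idea that a grade depends on standing within the group can be sketched as follows (the percentile cut points are invented for illustration; actual norm-referenced schemes vary):

```python
# Norm-referenced grading sketch: each grade depends on the student's
# standing within the group, not on fixed performance standards.
# The percentile cut points below are hypothetical.

def norm_referenced_grades(scores):
    n = len(scores)
    grades = []
    for s in scores:
        percentile = sum(1 for x in scores if x < s) / n * 100
        if percentile >= 75:
            grades.append("A")
        elif percentile >= 50:
            grades.append("B")
        elif percentile >= 25:
            grades.append("C")
        else:
            grades.append("D")
    return grades

# The same raw score earns different grades in different groups:
print(norm_referenced_grades([70, 75, 80, 85]))  # 70 is lowest here -> "D"
print(norm_referenced_grades([40, 50, 60, 70]))  # 70 is highest here -> "A"
```

The two calls show the point made above: a score of 70 is graded "D" in an able group and "A" in a weaker one.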

2. Criterion-Referenced Grading

  • Reflects absolute performance (i.e. score compared to specified performance standards)

In this system,

  • Grade does not depend on what group the student is in, but only on his own performance compared to a set of performance standards
  • Grading is a complex task because the grader must clearly define the domain, justify the performance standards, and base grades on criterion-referenced assessment
  • Conditions are hard to meet except in complete mastery learning settings

In criterion-referenced grading…

  • An outside evaluator knows only that the student has reached a certain level or set of objectives
  • The grade will always mean the same thing and will not vary from class to class
  • Outside factors might influence the entire class and performance may drop
  • No way to tell from the grading who the best students are, only that certain students have achieved certain levels
  • Criteria are known from the beginning, which allows students to take responsibility

3. Learning Ability or Improvement Performance Grading

  • Reflects ability or improvement performance (i.e. score compared to learning potential or past performance)

In this system,

  • Grades are inconsistent with a standards-based system because each child is his/her own standard
  • Reliably estimating learning ability is very difficult 
  • Cannot reliably measure change with classroom measures
  • Should only be used as a supplement



Friday, July 12, 2019

Giving and Reporting Learners' Grades


  • One of the more frustrating aspects of teaching since there are so many factors to consider and so many decisions to be made
  • The main aim is to provide results in brief, understandable form for varied users

Questions usually asked:

  • What should I count: just achievement, or effort too?
  • How do I interpret a student’s score?
  • Do I compare it to other students’ scores, a standard of what they can do, or some estimate of what they can do?
  • What should my distribution of grades be, and how do I determine it?
  • How do I display students’ progress, or strengths and weaknesses, to students and their parents?

Functions of Grading and Reporting Systems

Enhancing students’ learning 

  • clarifying instructional objectives for them
  • showing students’ strengths and weaknesses
  • providing information on personal-social development
  • enhancing students’ motivation 
  • indicating where teaching might be modified

Reports to parents/ guardians

  • Inform parents and guardians of students on the progress of their wards
  • Communicate how well objectives were met so parents can better plan

Administrative and guidance uses

Help to decide 

  • promotion
  • graduation,
  • honors
  • athletic eligibility
  • reporting achievement to other schools or to employers
  • providing input for realistic educational, vocational and personal counseling

Grades and report cards should promote and enhance learning rather than frustrate and discourage students

Parent-teacher conferences are encouraged to effectively function as motivation for further learning


Types of Grading and Reporting Systems

Traditional letter-grade system

  • Students’ performance is summarized by means of letters
  • Easy to understand
  • Of limited value when used as the sole report, because letter grades end up being a combination of achievement, effort, work habits and behavior
  • Does not indicate patterns of strengths and weaknesses

Pass-Fail

  • Utilizes a dichotomous grade system
  • Either a student has complied with and reached certain standards, in which case he passes, or he has failed to do so and gets a failing mark
  • It does not provide much information
  • Students tend to work to the minimum (just to pass) and no grades are reflected until the mastery threshold is reached

Checklist of Objectives

  • Objectives of the course are enumerated and students’ level of achievement is indicated: Outstanding, Very Good, Good, Fair or Poor
  • A very detailed reporting system that tends to be more informative for parents and pupils
  • Very time consuming to prepare
  • A potential problem is keeping the list manageable and understandable

 Letters to Parents/Guardians

  • Are a useful supplement to grades
  • Very time-consuming to prepare; the accounts of weaknesses are often misinterpreted by parents and guardians; and they are neither systematic nor cumulative

 Portfolios

  • A set of purposefully selected work, with commentary by student and teacher
  • Are useful for showing students’ strengths and weaknesses, illustrating the range of students’ work, showing progress over time or stages of a project, and teaching students about the objectives/standards they are to meet

  Parent-Teacher Conferences

  • Requires that parents come for a conference with the teacher to discuss the pupil’s progress
  • Are useful for a two-way flow of information and getting more information and cooperation from the parents






Wednesday, July 3, 2019

STATISTICAL CONCEPTS IN ASSESSMENT OF LEARNING


Statistical techniques allow us to describe the performance of our students and to make proper scientific inferences about their performance

A. Measures of Central Tendency

  • Numerical values which describe the average or typical performance of a given group in terms of certain attributes
  • Serve as the basis for determining whether a group is performing better or poorer than other groups

The Mean

  • The mean is a single numerical measure of the typical or average performance of a group of students
  • Is defined as the sum of observations divided by the number of observations

The Median

  • The middlemost score
  • Is unaffected by extreme examination scores
  • is not necessarily one of the actual scores
  • The median is the most appropriate average to calculate when the data result in skewed distributions

The Mode

  • The most frequent score

B. Measures of Variability

  • While measures of central tendency are useful statistics for summarizing the scores in a distribution, they are not sufficient. Two distributions may have identical means and medians, for example, yet be quite different in other ways. 

For example, consider these two distributions:

  • GROUP A: 19, 20, 25, 32, 39
  • GROUP B: 2, 3, 25, 30, 75
  • Measures of variability indicate or describe how spread out the scores are
  • The larger the measure of variability, the more spread out the scores are, and the group is said to be heterogeneous
  • The smaller the measure of variability, the less spread out the scores are, and the group is said to be homogeneous

The Range

  • Represents the distance between the highest and lowest scores in a distribution.
  • Because it involves only the two most extreme scores in a distribution, the range is but a crude indication of variability. 

The Standard Deviation

  • The most useful index of variability.
  • It is a single number that represents the spread of a distribution
  • Measure of average deviation or departure of the individual scores from the mean
  • The more spread out scores are, the greater the deviation scores will be and hence the larger the standard deviation.
  • The closer the scores are to the mean, the less spread out they are and hence the smaller the standard deviation. 
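Using the Group A and Group B scores from the example above, Python’s `statistics` module shows two groups with identical means and medians but very different spreads:

```python
import statistics

group_a = [19, 20, 25, 32, 39]
group_b = [2, 3, 25, 30, 75]

for name, scores in [("A", group_a), ("B", group_b)]:
    print(name,
          statistics.mean(scores),              # mean: 27 for both groups
          statistics.median(scores),            # median: 25 for both groups
          max(scores) - min(scores),            # range: 20 vs 73
          round(statistics.pstdev(scores), 2))  # std dev: 7.56 vs 26.53
```

Group B is the heterogeneous group: its larger range and standard deviation reveal the spread that the identical averages hide.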

Quartile Deviation or Semi-interquartile Range

  • Defined as one half the difference between quartile 3 (75th percentile) and quartile 1 (25th percentile) in a distribution
  • Counterpart of the median
  • Used when the distribution is skewed


B. Measures of Relative Position

Standard Scores

  • use a common scale to indicate how an individual compares to other individuals in a group. 
  • These scores are particularly helpful in comparing an individual’s relative position on different instruments.
  • The two standard scores that are most frequently used in educational research are z scores and T scores.

z Scores

  • the simplest form of standard score 
  • expresses how far a raw score is from the mean in standard deviation units

T  Scores

  • are z scores expressed in a different form. 
  • To change a z score to a T score, simply multiply the z score by 10 and add 50.
  • T = 10z + 50
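Both definitions translate directly into code (the raw score, mean, and standard deviation below are assumed values for illustration):

```python
def z_score(raw, mean, sd):
    # distance of a raw score from the mean in standard deviation units
    return (raw - mean) / sd

def t_score(z):
    # T = 10z + 50: rescales z so the mean becomes 50 and the SD 10
    return 10 * z + 50

z = z_score(85, mean=80, sd=10)   # 0.5: half an SD above the mean
print(z, t_score(z))              # 0.5 55.0
```

A score exactly at the mean gives z = 0 and T = 50; T scores avoid the negatives and decimals that make z scores awkward to report.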

Percentile Ranks

  • A percentile in a set of numbers is a value below which a certain percentage of the numbers fall and above which the rest of the numbers fall.
  • The median is the 50th percentile. 
  • Other percentiles that are important are the 25th percentile, also known as the first quartile (Q1 ), and the 75th percentile, the third quartile (Q3 ). 
  • To solve for a percentile rank, add the number of students scoring below the value to the number of students scoring equal to the value, then divide by the total number of test takers
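The rule stated above can be sketched as follows (it is implemented exactly as given; note that some texts count only half of the scores tied at the value):

```python
def percentile_rank(value, scores):
    # Students scoring below the value plus those scoring equal to it,
    # divided by the total number of test takers (x 100 for a percentage).
    below = sum(1 for s in scores if s < value)
    equal = sum(1 for s in scores if s == value)
    return (below + equal) / len(scores) * 100

scores = [50, 60, 70, 70, 80, 90, 95, 100]
print(percentile_rank(70, scores))  # 50.0
```

In the example, two students score below 70 and two score exactly 70, so 70 has a percentile rank of (2 + 2) / 8 × 100 = 50.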

Stanine Scores

  • Tell the location of a raw score in a specific segment in a normal distribution which is divided into 9 segments
  • Stanines 1, 2 and 3 reflect below average performance; 4, 5 and 6 reflect average performance; and 7, 8 and 9 reflect above average performance


SHAPES, DISTRIBUTIONS, and DISPERSION OF DATA

A. Shape

  • Normal Distribution
  • Rectangular Distribution
  • U-Shaped Curve

B. Kurtosis

  • Leptokurtic
  • Mesokurtic
  • Platykurtic

C. Unimodal, Bimodal and Multimodal Distributions of Test Scores

  • Unimodal – one most common score
  • Bimodal – two most common scores
  • Multimodal – more than two most common scores

D. Skewness

  • Positively Skewed Distribution (mean > median > mode)
  • Negatively Skewed Distribution (mode > median > mean)


Wednesday, June 5, 2019

ITEM ANALYSIS

ITEM ANALYSIS AND VALIDATION

Benefits of Item Analysis

  • It provides useful information for class discussion of the test
  • It provides data which helps students improve their learning
  • It provides insights and skills that lead to the preparation of better tests in the future

Item Difficulty

  • the number of students who are able to answer the item correctly divided by the total number of students

Item Difficulty = number of students with correct answer/total number of students


Index of Discrimination

  • tells whether an item can discriminate between those who know the answer and those who do not
  • The difference between the proportion of the upper group who got an item right and the proportion of the lower group who got the item right

Index of Discrimination = DU - DL (where DU and DL are the proportions of the upper and lower groups, respectively, who got the item right)
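Both indices can be computed from scored responses (1 = correct, 0 = wrong). The sketch below splits the class into simple upper and lower halves for illustration; in practice the top and bottom 27% of scorers are often used:

```python
def difficulty_index(responses):
    # proportion of all students who answered the item correctly
    return sum(responses) / len(responses)

def discrimination_index(upper, lower):
    # DU - DL: proportion correct in the upper group minus the lower group
    return sum(upper) / len(upper) - sum(lower) / len(lower)

# 1 = correct, 0 = wrong; groups formed from high and low total scorers
upper = [1, 1, 1, 0, 1]
lower = [1, 0, 0, 0, 0]
print(difficulty_index(upper + lower))     # 0.5: a moderately difficult item
print(discrimination_index(upper, lower))  # ~0.6: discriminates well
```

A positive index means the item favors the students who know more; an index near zero (or negative) flags an item worth revising.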



Friday, May 31, 2019

Constructing Essay-Type of Tests

 Essays

  • Classified as non-objective tests, essays allow for the assessment of higher-order thinking skills

Rules in Constructing Essay-Type of Tests

1. Phrase the questions in such a way that students are guided on the key concepts to be included

2. Inform the students on the criteria to be used for grading their essays. 

3. Put a time limit on the essay test

4. Decide on your essay grading system prior to getting the essays of your students

5. Evaluate all of the students’ answers to one question before proceeding to the next question

6. Evaluate answers without knowing the identity of the writer

7. Whenever possible, have two or more persons grade the answer


Saturday, May 25, 2019

Constructing Multiple Choice Tests


 Multiple Choice Tests

  • Each item in a multiple choice test consists of two parts:

  1. The stem
  2. The options

  • Distracters are chosen in such a way that they are attractive to those who do not know the answer 

Rules in Constructing Multiple Choice Tests

1. Do not use unfamiliar words, terms and phrases. The ability of the item to discriminate or its level of difficulty should stem from the subject matter rather than from the wording of the question.

Example:

  • In what year did Jose Rizal meet his demise?

2. Do not use modifiers that are vague and whose meanings can differ from one person to the next such as much, often, usually etc.

Example:

  • Much of Muslim-Filipino men are polygamous.

3. Avoid complex or awkward word arrangements. Avoid use of negatives in the stem as this may add unnecessary comprehension difficulties.

Example:

  • As President of the Republic of the Philippines, Corazon Cojuangco Aquino would stand next to which President of the Philippine Republic subsequent to the 1986 EDSA Revolution?

4. Do not use negatives or double negatives, as such statements tend to be confusing.

Example:

  • What does the statement, “Developmental patterns acquired during formative years are not unchangeable” imply?

5. Each item should be as short as possible; otherwise, you risk testing more for reading and comprehension skills

6. Distracters should be equally plausible and attractive

Example:

The short story “May Day’s Eve” was written by which Filipino author?

  1. Genoveva Edrosa Matute
  2. Nick Joaquin
  3. Edgar Allan Poe
  4. Robert Frost

7. All multiple choice options should be grammatically consistent with the stem

Example:

Immigration will most likely,

  1. Increases the population
  2. Improved economy
  3. Alleviate poverty
  4. Stopping of crimes

8. The length, explicitness or degree of technicality of alternatives should not be determinants of the correctness of answer

Example:

If the three angles of two triangles are congruent, then the triangles are:

  1. Congruent whenever one of the sides of the triangles are congruent
  2. Similar
  3. Equiangular and therefore must also be congruent
  4. Equilateral if they are equiangular

9. Avoid stems that reveal the answer to another stem.

10. Avoid alternatives that are synonymous with others or those that include or overlap others

Example

What causes ice to transform from solid state to liquid state?

  1. Change in temperature
  2. Change in pressure
  3. Change in chemical composition
  4. Change in heat levels

11. Avoid presenting sequenced items in the same order as in the text or reference material.

12. Use the “none of the above” option only when the keyed answer is totally correct. When choice of the best response is intended, “none of the above” is not appropriate, since the implication has already been made that the correct response may be partially inaccurate

13. Note that the use of “all of the above” may allow credit for partial knowledge.


Tuesday, May 14, 2019

Constructing a True or False Test

True or False Test

  • Binomial-choice tests are tests that have only two options such as true or false
  • Students have 50% chance of getting the correct answer by guess work
  • A modified true or false test can offset the effect of guessing 

Rules in Constructing True-False Items

1. Do not give a hint inadvertently on the body of the question

Example: 

  • The Philippines gained its independence in 1898 and celebrated its centennial year in 2000.

2. Avoid using the words always, never, often and other adverbs that tend to be either always true or always false.

Example: 

  • Christmas always falls on a Sunday because it is a Sabbath day. 

3. Avoid long sentences as these tend to be true. Keep sentences short

Example: 

  • Tests need to be valid, reliable and useful though it would require a great amount of time and effort to ensure that tests possess these test characteristics.

4. Avoid trick statements with some minor misleading word or spelling anomaly, misplaced phrases etc.

Example: 

  • The principle of the school is Mr. Albert Panadero.

5. Avoid quoting verbatim from reference materials or textbooks

6. Avoid grossly disproportionate number of either true or false statements or even patterns in the occurrence of true or false statements


Wednesday, April 10, 2019

Planning a Test and Construction of Table of Specifications

 

Steps in Planning a Test

  1. Identifying test objectives
  2. Deciding on the type of objective test to be prepared
  3. Preparing a TOS
  4. Constructing the draft test items
  5. Try-out and validation

1. Identifying Test Objectives

  • An objective test, for it to be comprehensive, must cover the various levels of Bloom’s Taxonomy.

2. Deciding on the Type of Objective Test

  • The test objectives dictate the kind of tests that will be designed and constructed

3. Preparing a Table of Specifications

Table of Specifications: a test map that guides the teacher in constructing a test

  • Ensure balance between items that assess lower level thinking skills and higher level thinking skills, easy and difficult items

4. Constructing the test items

  • It is advised that the actual number of items constructed in the draft be double the desired number of items

5. Item analysis and try-out

  • The test draft is tried out on a group of pupils to determine:
  • Item characteristics, through item analysis
  • Characteristics of the test itself: validity, reliability and practicality


Tuesday, March 5, 2019

Refresher Course for Licensure Examinations for Teachers: Assessment of Learning Part 4

 Properties of Assessment Methods

A. VALIDITY

  • refers to the appropriateness, correctness, meaningfulness and usefulness of the specific conclusions that a teacher reaches regarding the teaching-learning situation
  • the degree to which the assessment instrument measures what it intends to measure
  • the most important criterion of a good assessment instrument

Types of Validity

1. Content Validity- refers to the content and format of the instrument

  • Do students have adequate experience with the type of task posed by the item?
  • Did the teachers cover sufficient materials for most students to be able to answer the item correctly?
  • Does the item reflect the degree of emphasis received during instruction?

2. Face Validity- refers to the outward appearance of the test

3. Criterion Validity- refers to the degree to which information provided by a test agrees with information obtained from other, independent tests

4. Construct Validity- refers to the degree to which the totality of evidence obtained is consistent with theoretical expectations

B. RELIABILITY

  • refers to the consistency, dependability or stability of an assessment method
  • refers to the consistency of scores obtained by the same person when retested using the same instrument or its parallel form, or when compared with other students who took the same test

Methods of Checking Reliability

1. Test-Retest Method - involves administering the same instrument twice to the same group of individuals after a certain time interval has elapsed.

2. Equivalent-Forms Method- involves administering two different, but equivalent, forms of an instrument to the same group of individuals at the same time.

3. Test-retest with Equivalent Forms - involves giving parallel forms of tests with increased time interval between forms

4. Internal-Consistency Method- involves comparing responses to different sets of items that are part of an instrument (Split-half, Kuder-Richardson, Cronbach Alpha)

5. Scoring observer agreement- compare scores obtained by two or more scorers or observers
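As one concrete illustration of the internal-consistency idea, Cronbach’s alpha can be computed from an item-by-student score matrix (the scores below are invented for illustration):

```python
import statistics

def cronbach_alpha(item_scores):
    # item_scores: one inner list of scores per item, aligned by student.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    k = len(item_scores)
    totals = [sum(person) for person in zip(*item_scores)]
    item_var = sum(statistics.pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

# 3 items x 4 students; 1 = correct, 0 = wrong (invented data)
items = [[1, 0, 1, 1],
         [1, 0, 1, 0],
         [1, 1, 1, 0]]
print(cronbach_alpha(items))
```

Higher alpha means the items tend to rank students the same way, i.e. the responses to different item sets are consistent with one another.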

C. BALANCE

  • Involves setting targets in all domains of learning and domains of intelligence
  • Makes use of both traditional and alternative assessment

D. FAIRNESS

  • involves students’ knowledge of the learning targets and assessment methods to be used; assessment of the learning process rather than the learning object; and freedom from biases
  • the keys to fairness are as follows:
  • students have knowledge of learning targets and assessment
  • students are given equal opportunity to learn
  • students possess the pre-requisite knowledge and skills
  • students are free from teacher stereotypes
  • students are free from biased assessment tasks and procedures

E. PRACTICALITY and EFFICIENCY

  • includes familiarity, time efficiency and employability of the assessment method 

a. Teacher familiarity with the Method

  • The teacher should know the strengths and weaknesses of the method and how to use them

b. Time required

  • Includes construction and use of the instrument and the interpretation of the results
  • It is desirable to use the shortest assessment time possible that provides valid and reliable results

c. Complexity of the Administration

  • Directions and procedures for administration are clear, and little time and effort are needed

d. Ease of scoring

  • use scoring procedures appropriate to your method and purpose

e. Ease of interpretation

  • Interpretation is easier if there was a plan on how to use the results prior to assessment

f. Cost

  • The less expense used to gather the information, the better

F. CONTINUITY

  • Assessment takes place in all phases of instruction. It could be done before, during and after instruction

G. AUTHENTICITY

  • Involves:

a. meaningful performance task

b. clear standards and public criteria

c. quality products and performance

d. positive interaction between assessee and assessor

e. emphasis on meta-cognition and self-evaluation

f. learning that transfers

H. COMMUNICATION

  • Assessment targets and standards should be communicated

I. POSITIVE CONSEQUENCES

  • Assessment should have positive consequences for students; that is, it should motivate them to learn
  • Assessment should have positive consequences for teachers; that is, it should help them improve the effectiveness of their instruction

J. ETHICS 

  • Participants in an assessment method should be protected from physical or psychological harm, discomfort or danger that may arise due to the testing procedure
  • Test results and assessment results should be confidential and should be known only by the student concerned and the teacher
  • There are instances in which it is necessary to conceal the objective of the assessment in order to ensure fair and impartial results. When this is the case, the teacher has a special responsibility to: 

a. determine whether the use of such techniques is justified by the educational value of the assessment, 

b. determine whether alternative procedures are available which do not make use of concealment; and 

c. ensure that students are provided with sufficient explanation as soon as possible.

Assessment should be fair and non-biased.


Tuesday, February 12, 2019

Refresher Course for Licensure Examinations for Teachers: Assessment of Learning part 3

 B. Appropriateness of Assessment Methods

1. Written-Response Instruments

  • include objective tests (multiple-choice, true-false, matching or short answer), essays, examinations, and checklists
  • appropriate for assessing the various levels of the hierarchy of educational objectives

2. Product Rating Scales

  • used to rate products such as book reports, maps, charts, diagrams, notebooks, essays, and creative endeavors
  • to produce a product rating scale, the teacher must possess prototype products gathered over his/her years of experience

3. Performance Tests

  • one of the most frequently used performance tests is the checklist which consists of a list of behaviors that make up a certain type of performance

4. Oral Questioning

  • appropriate when the objectives are to assess the student’s stock knowledge and to determine the student’s ability to communicate ideas
  • several factors need to be considered when using this assessment which could mask the student’s true ability

5. Observation and Self Reports

  • are useful supplementary assessment methods when used in conjunction with oral questioning and performance tests
  • include tally sheets and self-checklist



Thursday, January 17, 2019

Refresher Course for Licensure Examinations for Teachers: Assessment of Learning Part 2

 II. PRINCIPLES OF HIGH QUALITY ASSESSMENT

A. Clarity of Learning Targets

  • Assessment can be made precise, accurate and dependable only if what are to be achieved are clearly stated. 
  • Learning targets need to be stated in behavioral terms or terms which denote something which can be observed through the behavior of the student.

 Bloom’s Taxonomy of Educational Objectives

1. REMEMBERING

  • Recall previously learned information.

Keywords:

defines, describes, identifies, knows, labels, lists, matches, names, outlines, recalls, recognizes, reproduces, selects, states.

Examples:

  • Recite a policy. 
  • Quote prices from memory to a customer. 
  • Knows the safety rules.

2. UNDERSTANDING: 

  • Comprehending the meaning, translation, interpolation, and interpretation of instructions and problems. 

Keywords:

comprehends, converts, defends, distinguishes, estimates, explains, extends, generalizes, gives an example, infers, interprets, paraphrases, predicts, rewrites, summarizes, translates.

Examples:

  • Rewrites the principles of test writing. 
  • Explain in one's own words the steps for performing a complex task. 

3. APPLYING: 

  • Use a concept in a new situation or unprompted use of an abstraction. Applies what was learned in the classroom into novel situations in the workplace.

Keywords:

applies, changes, computes, constructs, demonstrates, discovers, manipulates, modifies, operates, predicts, prepares, produces, relates, shows, solves, uses.

Examples:

  • Use a manual to calculate an employee's vacation time. 
  • Apply laws of statistics to evaluate the reliability of a written test.

4. ANALYZING: 

  • Separates material or concepts into component parts so that its organizational structure may be understood. Distinguishes between facts and inferences.

Keywords:

analyzes, breaks down, compares, contrasts, diagrams, deconstructs, differentiates, discriminates, distinguishes, identifies, illustrates, infers, outlines, 

Examples:

  • Troubleshoot a piece of equipment by using logical deduction. 
  • Recognize logical fallacies in reasoning. 

5. EVALUATING: 

  • Make judgments about the value of ideas or materials.

Keywords:

appraises, compares, concludes, contrasts, criticizes, critiques, defends, describes, discriminates, evaluates, explains, interprets, justifies, relates, summarizes, supports.

Examples:

  • Select the most effective solution. 
  • Hire the most qualified candidate. 
  • Explain and justify a new budget.

6. CREATING: 

  • Builds a structure or pattern from diverse elements. Put parts together to form a whole, with emphasis on creating a new meaning or structure.

Keywords:

categorizes, combines, compiles, composes, creates, devises, designs, explains, generates, modifies, organizes, plans, rearranges, reconstructs, relates, reorganizes, revises, rewrites, summarizes, tells, writes.

Examples:

  • Write a company operations or process manual. 
  • Design a machine to perform a specific task. Integrates training from several sources to solve a problem. 








Wednesday, January 2, 2019

Refresher Course for Licensure Examinations for Teachers: Assessment of Learning

 I.  BASIC CONCEPTS IN ASSESSMENT

MEASUREMENT

  • The use of tools or instruments to provide a quantitative description of the progress of students toward desirable educational goals
  • A process of quantifying the degree to which someone/something possesses a given trait

TEST OR TESTING

  • A systematic procedure to determine the presence or absence of certain characteristics or qualities in a learner
  • An instrument designed to measure any characteristic, quality, ability, knowledge or skill.
  • Comprised of items in the area it is designed to measure

ASSESSMENT

  • The process of interpreting the measurement data
  • A process of gathering and organizing quantitative or qualitative data into an interpretable form to have a basis for judgment or decision making

EVALUATION

  • Implies that measurement and assessment of an educational characteristic had been done and that it is now desired to pass a value judgment on the educational outcome by comparing results with predetermined criteria or objectives.
  • Provides a tool for determining the extent to which educational process or program is effective and indicates directions for remediating processes of the curriculum that do not contribute to successful student performance


Types of Measurement

a) Objective measurement

  • More stable since repeated measurements of the same quantity or quality of interest produce more or less the same outcome
  • Do not depend on the person or individual taking the measurements

b) Subjective measurement

  • Are used in certain facets of the quantity or quality of interest that cannot be successfully captured by objective measures
  • Often differ from one assessor to the next even if the same quantity or quality is being measured


Forms/Kinds of Assessment

1. Traditional Assessment

  • Refers to the use of paper-and-pencil tests

2. Alternative Assessment

  • Refers to the use of methods other than pen-and-paper objective test which includes performance test, projects, portfolios, journals and the like.

3. Authentic Assessment

  • Refers to the use of assessment methods that simulate true-to-life situations
  • This could be objective tests that reflect real life situations or alternative methods that are parallel to what we experience in real life


Roles of Assessment

a) Diagnostic

  • Determines the gaps in learning or learning processes and hopefully to be able to bridge these gaps
  • Used to determine students’ recurring or persistent difficulties
  • Searches for the underlying causes of student’s learning problems that do not respond to first aid treatment
  • Helps formulate a plan for detailed remedial instruction

b) Formative 

  • Allows the teacher to redirect and refocus the course of teaching a subject matter
  • Teachers continuously monitor the student’s level of attainment of the learning objectives
  • The results are communicated clearly and promptly to the students for them to know their strengths and weaknesses and the progress of their learning

c) Summative

  • Determines the extent to which the learning objectives are met and why
  • Used to certify what students know and can do and the level of their proficiency or competency
  • Reveals whether or not instruction has successfully achieved the curriculum outcomes
  • Paves the way for educational reforms

d) Placement

  • Determines the appropriate placement of a student both in terms of achievement and aptitude
  • Assesses the needs of the learners as a basis for planning relevant instruction
  • Used to know what the students are bringing into the learning situation and use this as a starting point for instruction