Developing Assessment Instruments
In this section: How do we develop instruments for assessment?
The instructional designer will make most use of two types of test:
- The Pre-Test
- The Post-Test
The Pre-Test assesses the prior learning and knowledge the learner possesses; it also measures the skills that have been identified as critical to beginning instruction. The Post-Test assesses all of the objectives being taught.
The designer needs to write at least one test item for each objective. For each of these items, the designer must specify clearly what mastery looks like, and make explicit how mastery will be judged: by observing the learner's level of performance, or by statistical means.
Test items must give the learner the opportunity to succeed, and must take into consideration the educational and social context in which the skill is to be performed. The more realistic the testing conditions, the better the student's performance.
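As a minimal sketch of the statistical approach to judging mastery mentioned above (the function name and the 80% criterion are hypothetical assumptions for illustration, not from the text), a learner's responses to the items written for one objective can be scored against a mastery threshold:

```python
def mastery_attained(item_scores, criterion=0.8):
    """Return True if the proportion of correct test items for one
    objective meets the mastery criterion (hypothetical 80% threshold)."""
    if not item_scores:
        raise ValueError("at least one test item is required per objective")
    proportion_correct = sum(item_scores) / len(item_scores)
    return proportion_correct >= criterion

# Example: 4 of 5 items correct on an objective -> 0.8 -> mastery attained
print(mastery_attained([1, 1, 1, 0, 1]))  # True
print(mastery_attained([1, 0, 0, 0, 1]))  # False (0.4 < 0.8)
```

Observation-based judgements of performance would replace the numeric criterion here with a checklist or rubric completed by the instructor.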
In this section we need to be aware of three components that educational psychologists have identified as facilitating mastery:
- Prerequisite skills
- Practice
- Feedback
The methods we use to teach these skills must take into consideration the learning context, the domain of the skills taught, the practicality of the situation and which learning theory would work best in these conditions. Other factors worth considering are cost, availability, flexibility, durability, convenience and the designer's ability to produce the media needed.
Also needed here is an instructional strategy. This will typically include a description of the components of a set of instructional materials, as well as the procedures that will be used with them. There are five major components to consider:
- Pre-instructional activities
- Information presentation
- Learner participation
- Assessment
- Follow-through activities
Instructional strategies will generally be used in four different ways. They can be used as a prescription to develop instructional materials; as a set of criteria through which we can evaluate existing materials; as a framework from which material and strategies can be planned; and finally, as a combination of these methods.
Developing an Instructional Strategy
In this section: We look at the five major components of an instructional strategy.
Developing Instructional Materials
In this section we look at the role of the instructor in developing learning material and how this reflects individualised learning.
There are three main approaches to the instructor's involvement, depending mostly on the instructor's role.
- The first is where the instructor takes a passive role by developing individualised learning material that covers all five stages of the instruction.
- The second is where the instructor is able to select and adapt existing resources in combination with a hands-on instructional strategy approach.
- The third is when the instructor actively delivers all the instruction.
Overall effectiveness diminishes progressively towards the third approach: the more the instructor must personally present the information, the less able they are to facilitate individualised learning, and the cumulative effect eventually becomes too great for the instruction to be effective.
After the strategy has been selected, the appropriate materials must be selected and evaluated. This must be done in consideration of the following criteria:
- The material addresses motivational concerns;
- The material includes appropriate content;
- The content is sequenced appropriately;
- All the required information is available;
- The material includes practice exercises;
- The material provides adequate feedback;
- The material contains appropriate test items;
- The material includes adequate follow-up directions;
- The material provides adequate learner guidance;
- The material provides support for memorisation and transfer.
There are three stages to formative evaluation:
- One-to-One evaluation. This is used to direct learner performance towards mastery and remove errors. The procedure for this first stage is to introduce the learners to the process and its purpose, have them complete the instruction and the tests, and interview them about the resource materials.
- Small group evaluation. This is conducted in order to review the effectiveness of the changes made after the first evaluative stage and also to determine whether the learners can use the learning material without an instructor. The procedure for this second stage is to explain the purpose of the evaluation and then administer the instruction according to the curriculum plan.
- Live trial. This is conducted to determine if the changes made were effective and if the instruction can be used within the planned context. The procedures for this final stage include the selection of the delivery location, the intended learners and the administration of learning by the instructor. Data collection will focus on environmental factors as well as the instruction itself.
While formative evaluation may seem like research, on a practical level it is very different. Research is usually applicable across several settings, whereas evaluations are conducted for specific purposes or outcomes. The data collected by an evaluation is intended to be applied to the revision of a specific unit of instruction.
Designing and Conducting Formative Evaluations
In this section we look at how we ensure the effectiveness and efficiency of instruction through formative evaluations.
Revising Instructional Materials
In this section we look at the revision of material after the formative evaluation.
The one-to-one and small-group evaluations described earlier both provide data for the revision of the learning material. Data from the one-to-one evaluations is usually used to fix the more obvious problems, while the small-group evaluations provide data more representative of the target population.
There are several summaries of data that can be used for data analysis for the small groups:
- Group’s Item-by-Objective Performance – this creates two important summaries for analysis: item quality and learner performance. Item analysis determines the difficulty of each item for the group, the difficulty of each objective for the group, and the consistency of the objectives’ test items.
- Learner’s Item-by-Objective Performance – this is similar to the group analysis and should be conducted in the same way after faulty items are removed following the group analysis.
- Learner’s Performance Across Tests – this is accomplished by analysing the pre- and post- test questions for each objective for each student. This analysis allows the instructor to identify whether learning has occurred for each objective.
- Graphing Learner’s Performances – this is an alternative way to display performance percentages by objective, allowing comparison of pre- and post-tests.
- Other types of data – other data to be analysed can include information gathered from questionnaires, open-ended responses and student comments during the instruction. This data may be anecdotal or informal, and has been known to be written directly on a copy of the instructional materials.
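The group's item-by-objective summaries above can be sketched in code. In the sketch below, the data layout, names and scores are all hypothetical assumptions for illustration: it computes each item's difficulty (the proportion of the group answering it correctly) and each objective's difficulty (the mean difficulty of its items).

```python
from collections import defaultdict

# Hypothetical small-group results: scores[learner][item] = 1 (correct) or 0,
# and a mapping from each test item to the objective it assesses.
item_to_objective = {"q1": "obj1", "q2": "obj1", "q3": "obj2"}
post_scores = {
    "ann": {"q1": 1, "q2": 1, "q3": 0},
    "ben": {"q1": 1, "q2": 0, "q3": 1},
}

def item_difficulty(scores):
    """Proportion of the group answering each item correctly."""
    n = len(scores)
    totals = defaultdict(int)
    for answers in scores.values():
        for item, correct in answers.items():
            totals[item] += correct
    return {item: total / n for item, total in totals.items()}

def objective_difficulty(scores, item_to_objective):
    """Mean item difficulty per objective for the group."""
    grouped = defaultdict(list)
    for item, diff in item_difficulty(scores).items():
        grouped[item_to_objective[item]].append(diff)
    return {obj: sum(vals) / len(vals) for obj, vals in grouped.items()}

print(item_difficulty(post_scores))
# {'q1': 1.0, 'q2': 0.5, 'q3': 0.5}
print(objective_difficulty(post_scores, item_to_objective))
# {'obj1': 0.75, 'obj2': 0.5}
```

Running the same summaries over pre-test scores and comparing them per objective gives the learner-performance-across-tests analysis described above.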
Once data has been collected, it should be used in the following order:
- Entry behaviours required;
- Pretests and Post-tests used;
- Instructional strategy employed;
- Learning time required;
- Instructional procedures used;
- Revision process.
- Dick, W., Carey, L. and Carey, J.O., 2014. The Systematic Design of Instruction. 8th ed. London: Pearson.
- Piskurich, G.M., 2006. Rapid Instructional Design. 2nd ed. San Francisco: Wiley.