Chapter 3 Discussion Questions

YOU DO NOT HAVE TO ANSWER THE QUESTIONS - JUST READ AND REVIEW THEM.

OVERVIEW

Chapter 3: Selection of Assessment Tools

Assessment is an information-gathering activity. Its purpose is to provide answers to important educational questions, whether these concern identification and placement, instructional planning, or monitoring of student progress and program effectiveness. The assessment process begins with careful planning, and one of the most critical preparatory steps is the selection of appropriate tools. Both legal and professional guidelines influence selection criteria. Among the important concerns are whether the assessment tool (a) fits the purpose of assessment, (b) is appropriate for the student to be studied and the professional who will use the tool, and (c) is both technically adequate and an efficient method of data collection. Evaluation of technical quality includes consideration of the suitability of the reference group with which the student is to be compared and the psychometric characteristics of the instrument, such as reliability, validity, and measurement error. Also important are the types of test scores and other results available from the measure under study. In addition, professionals must make every effort to avoid bias in all stages of the assessment process.

OUTLINE

I. Council for Exceptional Children (CEC) Professional Standards Related to Assessment
   A. All special educators should possess a common core of knowledge and skills related to assessment
      1. Basic terminology used in assessment
      2. Legal provisions and ethical principles regarding assessment of individuals
      3. Screening, pre-referral, referral, and classification procedures
      4. Use and limitations of assessment instruments
      5. National, state or provincial, and local accommodations and modifications
   B. In addition, there are specific competencies related to each specialty area
      1. Learning disabilities
      2. Emotional/behavioral disorders
      3. Mental retardation/developmental disabilities
II. Criteria for the Selection of Assessment Tools
   A. Legal guidelines for assessment
      1. Federal and state laws provide guidelines for the evaluation of students with disabilities
         a. Assessment is nondiscriminatory
         b. Assessment focuses on educational needs
         c. Assessment is comprehensive and multidisciplinary
         d. Assessment tools are technically adequate and are administered by trained professionals
         e. Rights of the parents and students are protected
   B. Professional guidelines
      1. Standards for Educational and Psychological Testing (1985)
      2. Test publishers' catalogues and manuals
      3. Mental Measurements Yearbooks series
      4. Tests in Print
      5. Test Reviews Online (www.unl.edu/buros)
      6. TestLink (www.ets.org)
      7. Tests
      8. Professional journals provide critical reviews of new tests
   C. Evaluation criteria
      1. The tool must fit the purpose of assessment
      2. The assessment instrument must be appropriate for the student
      3. The assessment instrument should match the skills of the professional using it
      4. The assessment tool must be technically adequate
      5. The assessment instrument should be efficient
III. Evaluating Technical Quality
   A. Measurement terminology
      1. Measurement scales may be nominal, ordinal, interval, or ratio
      2. Descriptive statistics include measures of central tendency, measures of variability, and correlation
   B. Test norms and other standards of comparison
      1. Norm-referenced tests
         a. Age, grade, and gender of norm group members should match characteristics of the students assessed
         b. Random selection is preferable
         c. Norm group should be representative of the population
         d. Norm group should be of adequate size
         e. Test norms should be recent to reflect current standards
      2. Criterion-referenced tests
         a. Determine whether students have mastered specific skills
         b. Content should match the student's current skill repertoire or school curriculum
   C. Reliability
      1. Test-retest reliability refers to consistency of a measure from one administration to the next
      2. Split-half reliability is concerned with internal consistency
      3. Interrater or interobserver reliability is concerned with consistency among evaluators
   D. Validity
      1. Content validity is the extent to which the instrument represents the content of interest
      2. Criterion-related validity relies on an outside criterion and may examine predictive validity or concurrent validity
      3. Construct validity examines the instrument and the theoretical construct it intends to measure
   E. Measurement error
      1. Standard error of measurement is a statistic used to quantify measurement error
      2. Measurement error is related to the variability of scores and reliability
IV. Test Scores and Other Assessment Results
   A. Results of informal measures are descriptive and straightforward
      1. Frequency counts and percentages are interval data
      2. Criterion-referenced tests may yield nominal data
      3. Rating scales provide ordinal data, as do age- and grade-referenced measures
   B. Norm-referenced tests yield raw scores, which are converted to derived scores based on the performance of the norm group
      1. Age and grade equivalents are ordinal scores derived from the performance of the norm group
      2. Percentile rank scores indicate the percentage of individuals within the norm group who earned an equal or lower raw score
      3. Standard scores transform raw scores to a new scale with a set mean and standard deviation
      4. Normal curve equivalent scores are normalized standard scores
      5. Stanines represent a range of performance rather than a specific score
V. Promoting Nonbiased Assessment
   A. Issues in the assessment of culturally and linguistically diverse students
      1. For students whose primary language is not English, tests and test directions should be translated into the home language
      2. Interpreters may be used in the assessment of students who speak languages other than English
      3. Culture-free and culture-fair measures attempt to minimize bias
      4. Culture-specific measures relate directly to specific cultures
      5. Separate norms may be provided for students from diverse groups
      6. Test administration procedures may be modified
      7. Dynamic assessment can be used to study the student's learning ability
      8. Standardized tests can be replaced with informal procedures
   B. Guidelines for the selection of nondiscriminatory assessment tools
      1. Norm groups should be representative of the race, culture, and gender of the student to be assessed
      2. Tests containing items that reflect cultural bias should be avoided
      3. The tools for assessment should minimize the effects of the disability

DISCUSSION QUESTIONS

1. Describe at least three information sources that can provide guidance in the selection of assessment tools.
2. Discuss the concepts of variability and reliability as they relate to measurement error.
3. Compare and contrast three types of derived scores: percentile ranks, standard scores, and stanines. Tell how these scores are related to one another and to the normal distribution. A sketch may be included.
4. Refer to the CEC's Professional Standards for Special Education Professionals (Table 3–1). According to the standards, all special educators should possess a common core of knowledge and skills related to assessment. Why is this important, and why are there differences in knowledge and skills related to differing areas of disability?
5. CEC Standard 8 lists the knowledge areas special education professionals should possess. One area is to understand the limitations of assessment instruments. Discuss the limitations of norm-referenced as well as curriculum-based assessment instruments.
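Discussion question 3 asks how percentile ranks, standard scores, and stanines relate to one another and to the normal distribution, and the measurement-error material rests on the standard error of measurement. A minimal Python sketch of those relationships follows; the norm-group mean (100), standard deviation (15), and reliability coefficient (.91) are illustrative assumptions, not figures from the chapter.

```python
import math

def normal_cdf(z):
    """Proportion of a normal distribution falling at or below z (in SD units)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def derived_scores(raw, norm_mean, norm_sd):
    """Convert a raw score to common derived scores using norm-group statistics."""
    z = (raw - norm_mean) / norm_sd              # z-score: mean 0, SD 1
    standard = 100 + 15 * z                      # standard score on a mean-100, SD-15 scale
    percentile = 100 * normal_cdf(z)             # percent of norm group at or below this score
    nce = 50 + 21.06 * z                         # normal curve equivalent: mean 50, SD 21.06
    stanine = min(9, max(1, round(2 * z + 5)))   # stanines: half-SD bands numbered 1 through 9
    return z, standard, percentile, nce, stanine

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - reliability): error shrinks as reliability rises."""
    return sd * math.sqrt(1.0 - reliability)

# Illustrative (assumed) figures: a raw score one SD above a norm-group
# mean of 100 with SD 15, on a test with reliability .91.
z, standard, percentile, nce, stanine = derived_scores(115, 100, 15)
sem = standard_error_of_measurement(15, 0.91)
```

Under these assumptions, a raw score of 115 corresponds to z = 1.0, a standard score of 115, roughly the 84th percentile, and stanine 7, and the SEM works out to about 4.5 raw-score points; the second function also illustrates the variability-reliability relationship raised in question 2.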