Lorraine

===  __Discussion Questions:__

__1. Discuss the importance of using both informal and norm-referenced reading assessments when gathering information to plan special education programs.__

Although norm-referenced information can be useful in determining which reading skills are areas of need, criterion-referenced information and other types of informal data provide specific descriptions of the student's current status in skill development. These assessments are used not only to determine program eligibility, but also to plan instruction.

__2. IRIs typically provide three reading levels. Describe these levels and the importance of each.__

IRIs identify three levels: Independent, Instructional, and Frustration.
- The Independent level is the level of graded reading materials that can be read easily, with a high degree of comprehension and few decoding errors. At this level, the student reads independently, without instruction or assistance from the teacher.
- Materials at the Instructional level are more difficult; this is the level appropriate for reading instruction.
- Materials at the Frustration level are too difficult for the student; decoding errors are too frequent, and comprehension too poor, for instruction to occur.

The purpose of these measures is to provide information about the student's reading skills in relation to the grade level of the general school curriculum. The results obtained from IRIs are grade-level scores.

__3. Select five informal reading assessment strategies. Describe each strategy and compare and contrast their purposes.__

- Teacher Checklists: These are quick and efficient means of gathering information from teachers and other professionals about their observations and perceptions of students' reading skills. The reading behaviors observed can involve decoding, comprehension, oral reading, silent reading, or a combination.
- Error and Miscue Analysis: This strategy indicates common types of oral reading errors. It provides information about how the student is processing the text and suggests directions for instructional interventions. Error analysis is generally used to investigate decoding mistakes in oral reading.
- The Cloze Procedure: This procedure determines whether a particular textbook or other reading material is within a student's instructional reading level. The teacher selects a passage of approximately 250 words. The first and last sentences are left intact; within the rest of the passage, every 5th word is deleted and replaced with a blank, and the student is supposed to supply the missing word. This procedure is also useful for assessing comprehension.
- Questionnaires and Interviews: These gather information about students' views and opinions: their attitudes toward reading, perceptions of the reading process, opinions of their own reading abilities, and likes and dislikes in reading materials. Interviews are preferred for younger students and those with poor reading skills.
- Portfolio Assessment: A portfolio may include a written log in which a student records reactions to the books and stories he or she has read. Several types of information should be included in the portfolio to document progress in reading:
  o Results of standardized tests and informal assessments
  o Student self-assessments
  o Samples of the types of materials read throughout the year as part of classroom instruction, and
  o Information about participation in a voluntary leisure reading program.

__4. Describe three approaches to reading instruction - bottom up, top down and interactive. Discuss the pros and cons of each approach.__

Three divergent models of proficient reading have been proposed. These models differ in the importance they attach to text and meaning, two aspects of the reading process.
- In the bottom-up model, it is hypothesized that proficient readers proceed from the text to meaning: first individual letters and words are perceived and decoded, and then comprehension of the text's meaning takes place.
- In contrast, the top-down model emphasizes what are considered the higher-level processes of comprehension: prior knowledge, previous experience, questioning and hypothesis testing, and comprehension of the meaning of textual material rather than decoding of individual text elements.
- The third model, the interactive model, emphasizes both text and meaning. In this model, reading is viewed as "an interactive process" in which the reader strategically shifts between the text and what he or she already knows to construct a response.

The pro of each model is that it addresses specific reading strategies; the con is that, on its own, each model is incomplete. Integrating the three models with the other language arts (speaking, listening, and writing) is a significantly better approach to reading instruction as a whole.

__5. Explain the importance of phonemic awareness and phonological processing to the reading process.__

Phonemic awareness is an important readiness skill for the acquisition of beginning reading skills. It is the ability to recognize that the words we hear are composed of individual sounds within the word; those individual sounds are called phonemes. Phonological processing describes more complex operations with phonemes, such as discrimination among phonemes, rhyming, sequencing, and recall.

Thank you so much for sharing your experiences with everyone! 
Great job! Lorraine Perillo Observation Hours: 6.5 hours SPEN 303Z Collaboration and Assessment March 27, 2010 On Saturday, March 27, 2010, I attended a Mini Conference entitled Perspectives on Inclusion, featuring keynote speaker Mrs. Kathie Snow and sponsored by PEAC (Pennsylvania’s Education for All Coalition, Inc., of which I am a member) and Holy Family University, from 9:00am until 3:30pm. The Mini Conference concentrated on collaborating to break down barriers to inclusion and on communicating in order to collaborate. In addition, several round table discussions, called Walking in Their Shoes, were set up for question and answer sessions with professionals (principals, teachers, professors, therapists) on inclusion. The mini conference was open to all families, parents, educators, university students and individuals with disabilities. It was very informative, and attendees were able to hear people with disabilities speak out about the trials and tribulations of living with a disability and how they are treated in the ‘normal’ environment that we take for granted. There was sharing of stories and strategies. Parents spoke about their disabled children, and therapists spoke about their interactions with families who have children with disabilities. One young man, Ben, attended the conference, and he spoke out against the treatment he gets from therapists who are supposed to help him. I was taught the term “learned helplessness”: we need to support, not smother, people with disabilities. We should help them, not enable them to be helpless. It is important for me to change my behavior toward people with disabilities (and people without), since all people have idiosyncrasies that we might not be able to tolerate, but can learn to live with by changing our ways. Kathie Snow shared some of her stories of raising her son Benjamin, who lives with MS and is paralyzed and uses a wheelchair. 
I was not aware of the behaviors of doctors and therapists who are so focused on medical/physical treatments that they miss the human condition of their patient/client. She was told by her son’s doctors and many different therapists that her son would not be able to do anything but wither away. With the love, support and dedication of his parents, her son is a college graduate, is involved in politics, reads and writes with assistance, and now wants to drive. His life is not dismal, but complete by his standards, and he is a shining star in his parents’ eyes, as he should be. I was taught how to communicate with school officials regarding IEPs. Mrs. Snow stated, “generate synergy with style, substance and words.” To get your questions and concerns addressed, you pose your questions in non-threatening ways. Mrs. Snow also quoted a gentleman named Dr. Wendell Johnson (4/16/1906-8/29/1965), an American psychologist, speech pathologist, author and general semanticist. He must have been very profound in his time for his words to carry on today. She quoted phrases of his such as “Our language does our thinking for us” and “The quality of the answer depends on the quality of the question.” It is important to negotiate, never dispute, when speaking to school officials. For example, say, “What would it take for me to...?” Ask these three specific questions: 1. What do you mean? (ask for clarity) 2. How do you know that? (ask for validity) 3. What next? (what do you do with the information you just received) It is all in the wording; do not be confrontational. Try using phrases such as “It seems to me...” or “In my opinion...”, since we are not the experts in their field. Hopefully, we will get the answer we want and deserve if our wording is phrased correctly. We want answers, and the key is in the asking. 
The main purpose of attending this mini conference was to learn about the world of disability. What I came away with was far more than I had anticipated. I found great importance in the idea that the power of language, new attitudes and actions brings all people together - disabled and non-disabled alike. Mrs. Snow said, “One out of five Americans is a person with a disability. A person with a disability is more like people without disabilities than different. There have always been people with disabilities in the world and there always will be. Disability, along with gender, ethnicity, age and other traits, is simply one of many natural characteristics of being human. This principle is embodied in the Developmental Disabilities Act and other Federal laws. It’s time for the warmth of inclusion to shine equally on people with and without disabilities in our society.” ===



=== __10/10 POINTS EARNED - CHAPTER 7

1. Define learning aptitude: Refers to an individual's capacity for altering behavior when presented with new information or experiences.

2. List at least 3 ways the field of assessment has attempted to make the assessment of students from linguistically and culturally diverse backgrounds fair.

a) The System of Multicultural Pluralistic Assessment (SOMPA) is a battery of nine measures designed to provide information about the general learning aptitude of children ages 5 to 11 from diverse sociocultural backgrounds and White, African American, and Hispanic ethnic groups. The SOMPA assesses performance from three separate perspectives: Medical, Social System and Pluralistic.
- Medical measures evaluate the student's current health status to determine whether pathological conditions are interfering with physiological functioning.
- Social System measures determine whether the student is meeting performance expectations for school and social roles.
- The Pluralistic perspective of the SOMPA is concerned with whether the student is meeting performance expectations for age and sociocultural group. The Sociocultural Scales assess the student's current environment. Then the WISC-R is rescored to compare the student with his or her sociocultural group rather than with standard norms. The new WISC-R scores are then considered measures of Estimated Learning Potential.

3. Define adaptive behavior: Adaptive behavior has been defined as "the effectiveness or degree with which individuals meet the standards of personal independence and social responsibility expected for age and cultural group" and, more recently, as "the collection of conceptual, social, and practical skills that have been learned by people in order to function in their everyday lives".

4. Describe the two primary areas of assessment included on most individual tests of intellectual performance.
In assessment, intellectual functioning is operationalized as performance on standardized tests of intelligence. Such measures claim to assess reasoning abilities, learning skills and problem-solving abilities. However, most intelligence tests are essentially measures of **verbal abilities and skills** **in dealing with numbers and other abstract symbols**. Because these skills and abilities are required in school learning, tests of intelligence are best viewed as measures of scholastic, not general, aptitude.

5. Provide examples of information parents or other family members can contribute to the assessment of intellectual performance.

Parents and other family members can contribute to the assessment of intellectual performance because their perspective is valuable: they have lifetime experience with their child in a wide range of activities and environments. For instance, parents may be able to compare one child's rate of learning with that of siblings or age mates. They may also identify tasks the child learns easily and tasks he or she finds difficult. __===

Really nice job! Grade - A === Norm-Referenced Testing Project - Due 3/12/2010

The Peabody Developmental Motor Scales Test (PDMS) (birth to 5 years of age). Reliability and validity have been determined empirically.

**Purpose:** To determine whether children have delays in motor development and motor control; the test supports motor learning and systems approaches to evaluation and intervention. As children with disabilities have moved to more inclusive settings, physical therapists have had to place greater emphasis on evaluation of the functional skills that children with disabilities need to participate in general education environments. This norm-referenced test, the Peabody Developmental Motor Scales Test, measures motor skills and is widely used by occupational therapists, physical therapists, diagnosticians, early intervention specialists, adapted physical education teachers, psychologists, and others who are interested in examining the motor abilities of young children. The test is composed of six subtests:

1. Reflexes: This subtest measures a child's ability to automatically react to environmental events. Because reflexes typically become integrated by the time a child is 12 months old, this subtest is given only to children from birth through 11 months.
2. Stationary: This subtest measures a child's ability to sustain control of his or her body within its center of gravity and retain equilibrium. (all ages)
3. Locomotion: This subtest measures a child's ability to move from one place to another. The actions measured include crawling, walking, running, hopping, and jumping forward. (all ages)
4. Object Manipulation: This subtest measures a child's ability to manipulate balls. Examples of the actions measured include catching, throwing, and kicking. Because these skills are not apparent until a child has reached the age of about 11 months, this subtest is given only to children ages 12 months and older.
5. Grasping: This subtest measures a child's ability to use his or her hands. It begins with the ability to hold an object with one hand and progresses up to actions involving the controlled use of the fingers of both hands. (all ages)
6. Visual-Motor Integration: This subtest measures a child's ability to use his or her visual perceptual skills to perform complex eye-hand coordination tasks such as reaching and grasping for an object, building with blocks, and copying designs. (all ages)

Composites:
- Gross Motor Quotient: This composite is a combination of the results of the subtests that measure the use of the large muscle systems: Reflexes, Stationary, Locomotion and Object Manipulation.
- Fine Motor Quotient: This composite is formed by a combination of the results of the subtests that measure the use of the small muscle systems: Grasping and Visual-Motor Integration.
- Total Motor Quotient: This composite is formed by a combination of the results of the gross and fine motor subtests. Because of this, it is the best estimate of overall motor abilities.

**Procedure for Administration:** One-on-one testing is relatively easy to do. The developers of the PDMS recommended its use with children with disabilities, and the PDMS has an adequate number of test items at each age level. It is easily used to identify children with developmental delays. This particular norm-referenced test requires a standardized format of one-to-one testing by an examiner, so the result may not predict a child's performance when playing with other children. It may also be used in a group setting, with several examiners helping to keep track of performances. For example, two teams of children in the same age group, one team with disabilities and the other without, could play a ball-throwing game in the gym while examiners score performance using the criteria within the PDMS subtests.

**Reporting Results:** It seems that children with developmental delays perform better in isolated test settings than they do in a structured game with peers. 
Children with disabilities perform differently in different environments. For a motor skill learned in one setting to transfer to other environments, the environments in which the skill is practiced must be similar and must require the same processes. You cannot compare testing motor skills one-on-one with a therapist against the same skills in the gym with peers. The scores of the Peabody Developmental Motor Scales norm-referenced test can be given to the school administrator, parents, and therapists in data analysis form according to the PDMS test manual.

**Providing recommendations for remediation:** To determine whether a child has a developmental delay, standardized administration of norm-referenced tests is appropriate. Because norm-referenced tests do not measure motor skills in natural environments, children's disabilities, and how they may be affected by societal limitations, must be observed in a natural environment. A child's everyday functioning in his or her environments, along with observations and conversations with teachers and parents, is needed to modify the student's services; all of these are part of the process of evaluation for special education services. === === __10/10 points earned! WOW Lorraine Perillo, Chapter 6 Essays, 2/27/2010__ ===

=== 1. When assessing for school performance problems, what recommendations are given for the use of norm-referenced standardized tests, criterion- referenced tests and curriculum-based measurement? One of the first steps in special education assessment is the administration of an individual norm-referenced test of academic achievement. Tests that survey several areas of the curriculum, usually basic skill subjects are most usual. Because the purpose at this point in the assessment process is to determine whether significantly school performance problems exist, norm referenced measures are appropriate. There has been much debate over the relative merits of norm referenced versus criterion referenced measures in achievement testing, but norm references tests remain the most common strategy for eligibility assessment. Norm references measures provide the comparative information necessary for determining eligibility and they are much more time efficient. Criterion references tests and other informal measures are typically used after eligibility has been established to provide more detailed description of student performance in areas of educational need. 2. Why are testing accommodations allowed for students with special needs in state- and district- wide tests? Name 6 common testing accommodations. === Timing accommodations are changes in the duration of the test. Such accommodations may include: · Extending the time allowed for administration of a test on the scheduled day, by starting early and/or ending late on the same day (the IEP/504 Plan must specify the amount of time to be allotted, such as “double time”). · Changing the way the time is organized by specifying the amount of time a student should work without a break (e.g., a ten-minute break for each 30-minutes of testing). · Administering State assessments over multiple days. (Requires Department approval). 
Timing accommodations may also be needed in conjunction with a variety of other testing accommodations. For example, a student using special equipment to record responses or dictating responses to a scribe may complete examinations more slowly. Some accommodations such as the use of magnification devices may induce fatigue. Setting accommodations are often needed in conjunction with scheduling accommodations because the test is being administered at a different time. Examples of characteristics, which may indicate the need for flexible scheduling/timing accommodations, include: · slow cognitive processing or work rate. These students may need extended time. · limited attention span and low frustration levels. These students may need frequent breaks. · limited physical stamina. Students with limited physical stamina may need extended time and frequent breaks. Providing additional time may benefit some students but not others, depending on the individual needs of the student. For example, some students may use additional time to second-guess themselves and repeatedly revise their responses to test items. Long periods of test taking may diminish a student’s optimal performance as the student tires and loses concentration. To help determine how much additional time a student may need for tests, the additional time that the student needs for instruction should be considered. In addition, students using Braille or large print to take an assessment may need additional time to complete the test.
 * 1. FLEXIBILITY IN SCHEDULING/TIMING: **

· changes in the //conditions// of the setting, such as special lighting or adaptive furniture, or · changes in the //location// //itself,// accomplished by moving the student to a separate room. Flexibility in setting may be needed in conjunction with other accommodations provided to the student. For example, changing the location of an examination may be needed to effectively provide extended time or use of a scribe.

Types of setting accommodations include the following: · Separate location/room – administer test individually · Separate location/room – administer test in small group (3-5 students) · Provide adaptive or special equipment/furniture (specify type, e.g., study carrel) · Special lighting (specify type, e.g., 75 Watt incandescent light on desk) · Special acoustics (specify manner, e.g., minimal extraneous noises) · Location with minimal distraction (specify type, e.g., minimal visual distraction) · Preferential seating Examples of student characteristics which may indicate the need for flexible setting accommodations include students who have difficulty maintaining attention in a group setting; students who use specialized equipment that may be distracting to others; and students with visual impairments who may need special lighting. Accommodations in method of presentation change the way in which an assessment is presented to a student. These include: · Revised test format* Ø Braille editions of tests Ø Large type editions of tests Ø Increased spacing between test items Ø Increased size of answer blocks/bubbles Ø Reduce number of test items per page Ø Multiple-choice items in vertical format with answer bubble to right of response choices Ø Presentation of reading passages with one complete sentence per line (this is not always possible with large type) · Revised test directions Ø Directions read to student Ø Directions reread for each page of questions Ø Language in directions simplified Ø Verbs in directions underlined or highlighted Ø Cues (e.g., arrows and stop signs) on answer form Ø Additional examples provided
 * 3. METHOD OF PRESENTATION**

Revision of test directions is an accommodation that is limited to oral or written instructions provided to all students that explain where and how responses must be recorded; how to proceed in taking the test upon completion of sections; and what steps are required upon completion of the examination. The term “test directions” never refers to any part of a question or passage that appears on a State assessment. - Use of aids or assistive technology devices Ø Audio tape Ø Computer (including talking word processor) Ø Listening section repeated more than the standard number of times Ø Listening section signed Ø Listening section signed more than the standard number of times Ø Masks or markers to maintain place Ø Papers secured to work area with tape/magnets Ø Test passages, questions, items and multiple-choice responses read to student Ø Test passages, questions, items and multiple-choice responses signed to student Ø Visual magnification devices (specify type) Ø Auditory amplification devices (specify type, e.g., FM system) For some students with disabilities, the standard location for test administration may not be appropriate. Setting accommodations are changes in the location in which an assessment is administered. This can include: Accommodations in method of response are changes in the way students respond to an assessment. Similar to methods of presentation, these include: · Revised response format such as allowing marking of answers in booklet rather than answer sheet; · Use of additional paper for math calculations; · Use of Aids/Assistive Technology § Amanuensis (Scribe) § Tape Recorder § Word processor § Computer (School must ensure that students do not have access to any programs, dictionaries, thesaurus, internet etc. that may give them access to information or communication with others). 
Examples of characteristics which may indicate the need for accommodations in the method of test response include: · physical disabilities that limit their ability to write in the standard manner. Students with physical disabilities may need to dictate their responses to a scribe. · difficulty tracking from the test booklet to the answer sheet. These students may need to write directly in the test booklet. · attention difficulties. Students with attention difficulties may need to write directly in the test booklet. There may be other accommodations considered that are not included in the previous categories. Some students may have a disability which affects their ability to maintain attention on the test. These students need physical or verbal prompts to stay on task and remain focused. Some students may have a disability which affects their ability to spell and punctuate and may require the use of spell or grammar checking devices. Some students have the reasoning capability to complete narrative mathematics problems and involved computations, but may have visual or motor impairments which make them unable to use paper and pencil to solve computations. Some students with disabilities are unable to memorize arithmetic facts but can solve difficult word problems. Except as specifically prohibited on the Grades 3-8 Mathematics tests, these students may be provided the use of computational aids, such as arithmetic tables or calculators. Only those students whose disability affects their ability to either memorize or compute basic mathematical facts should be allowed to use computational aids. 
To meet the needs of these students, the following additional accommodations may be considered (except as specifically prohibited on the Grades 3-8 Mathematics tests): · On-task focusing prompts · Waiving spelling requirements · Waiving paragraphing requirements · Waiving punctuation requirements · Use of calculator · Use of abacus · Use of arithmetic tables · Use of spell-check device* · Use of grammar-check device  The learning standards for physical education apply to all students and students with disabilities must be included in these assessments. Due to the unique nature of physical education, the accommodations that may be provided to enable students with disabilities to participate in physical education assessments are also unique. Accommodations can include changes in equipment, environment and/or the basic rules. The following are suggestions for physical education instructional and assessment accommodations for students with disabilities: · Reduce the size of the playing area · Reduce the number of participants · Reduce the time of the task · Varied size, weight, color of equipment · Use of brightly colored paint to identify field markings · Use of cones or markers to indicate field markings · Field markings may be modified in width · Use of a beeper ball and/or a localizer to identify bases · Use of hand signals or teammate shoulder tap to start and stop play · Allow use of alternative communication methods (e.g., interpreter, picture board, flash cards, etc.) by student · Select the court environment with the least noise · Increase the size of the playing area to allow the student more personal space and less likelihood of contact · Provide verbal cues · Provide pinch runner for games requiring running
**4. METHOD OF RESPONSE**
**5. OTHER ACCOMMODATIONS**
**6. ACCOMMODATIONS FOR PHYSICAL EDUCATION ASSESSMENTS**

3. Besides administering individual achievement tests to students, what are other ways to establish a student’s school performance strengths and challenges? The criteria that educators use when assessing student performance are a significant part of assessment. For instance, cultural and linguistic factors play a role: for students from different backgrounds, certain criteria may not make sense and may make it difficult for them to know how to improve their work. Useful criteria make learning targets clear for everyone (students, teachers, and parents), assuring that the first key to good assessment is in place. For all students, the keys to good assessment should be used, and an emphasis on clear and appropriate learning targets and clear and appropriate purposes should result in clear and appropriate criteria. Criteria are clearest when students are involved in developing, trying out, and refining them. One strategy for involving students in developing criteria, as well as for making meaning of criteria, is to help them see criteria in the world around them. There are four simple steps teachers can go through with students to help them influence and understand how their work will be assessed (adapted from Gregory, Cameron, & Davies, 1997).

1. Students should brainstorm all the possible attributes of any given assignment.
2. Students should sort the subsequent lists of attributes into like responses, then create and label categories. The class should discuss whether some of the things on the list are really personal preferences rather than required components. Each category may become a trait.
3. For each trait, the class should brainstorm what constitutes high, medium, and low performance on the trait. This forms a rubric made of criteria that can be used to score student performance, indicating strengths and areas for improvement.
4. The class should try out the criteria with work samples, and then add, revise, and refine as needed.

Even when there is an existing scoring guide or rubric, time spent helping ALL students understand criteria will pay off in improved performance.

4. When choosing an academic achievement test to determine school performance problems, what features of the test would you look at to determine its appropriateness for your student? The major reason for using a norm-referenced test (NRT) is to classify students. NRTs are designed to highlight achievement differences between and among students to produce a dependable rank order of students across a continuum of achievement, from high achievers to low achievers (Stiggins, 1994). School systems might want to classify students in this way so that they can be properly placed in remedial or gifted programs. These types of tests are also used to help teachers select students for different ability-level reading or mathematics instructional groups.

=== __10/10 points earned! You outdid yourself this time!__===

=== Chapter #5 Essay Questions (**Validity:** the agreement between a test score and the quality it is believed to measure; in other words, how closely what a test actually measures matches what it is intended to measure. A gap between the two can be caused by two particular circumstances: a) the design of the test is insufficient for the intended purpose, and b) the test is used in a context or fashion that was not intended in the design.) 1. Define and explain each of the following types of validity: **Criterion-related**, **Concurrent and Construct.** **Criterion-related** validity evidence involves the correlation between the test and a criterion variable (or variables) taken as representative of the construct. In other words, it compares the test with other measures or outcomes (the criteria) already held to be valid. For example, IQ tests are often validated against measures of academic performance (the criterion). If the test data and criterion data are collected at the same time, this is referred to as concurrent validity evidence. If the test data are collected first in order to predict criterion data collected at a later point in time, this is referred to as predictive validity evidence. **Concurrent** validity refers to the degree to which the test correlates with other measures of the same construct that are measured at the same time. For a selection test, for example, this would mean that the tests are administered to current students and then correlated with their scores on performance reviews. Concurrent validity evidence is especially useful for diagnostic screening tests. **Construct** validity refers to the extent to which tests of a construct (e.g. practical tests developed from a theory) actually measure what the theory says they do. For example, to what extent is an IQ questionnaire actually measuring "intelligence"? 
There are two approaches to construct validity - sometimes referred to as "convergent validity" and "divergent validity" (also known as discriminant validity). 2. Discuss the difference between **Convergent validity** and **Discriminant validity.** A test has **convergent validity** if it has a high correlation with another test that measures the same construct. By contrast, a test's **discriminant validity** is demonstrated through a low correlation with a test that measures a different construct. Note that this is the only case in which a low correlation coefficient (between two tests that measure different traits) provides evidence of high validity. The basic difference is that convergent validity tests whether constructs that should be related are in fact related, while discriminant validity tests whether constructs believed to be unrelated are, in fact, unrelated. 3. There are two types of criterion-related validity. Define each and explain the difference between them. Criterion-related validity is the degree to which performance on an assessment procedure predicts (or is statistically related to) an important criterion such as performance, training success, or productivity. **Concurrent validity** is measured by comparing two tests done at the same time, for example a written test and a hands-on exercise that seek to assess the same criterion. This can be used to limit criterion errors. **Predictive validity**, in contrast, compares success on the test with actual success in the future. The test is then adjusted over time to improve its validity. 4. There are four factors that can affect the validity of our assessment measure. Discuss each of these four factors. Test characteristics include: 1. a test of maximum performance (e.g. an achievement test), which tells us what a person can do; 2. a test of typical performance (e.g. a personality test), which tells us what a person usually does; 3. a speed test, in which response rate is assessed; and 4. 
a mastery test, which assesses whether or not the person can attain a pre-specified mastery level of performance. Response sets include: social desirability (giving responses that are perceived to be socially acceptable), acquiescence (agreeing or disagreeing with everything), and deviation (giving unusual or uncommon responses). All of the above can threaten the validity of a given set of results. (**Reliability:** if a test is unreliable, then although the results of one use may be valid, the results of another use may be invalid. Reliability is thus a measure of how much you can trust the results of a test. Tests often have high reliability - but at the expense of validity. In other words, you can get the same result time after time, but it does not tell you what you really want to know.) 5. Define and explain each of the following types of reliability. **Test-Retest Reliability**, **Split-half testing**, **Inter-Rater Reliability** and **Alternate-forms.** **Test-Retest Reliability:** An assessment or test of a person should give the same results whenever the test is applied. Test-retest reliability evaluates reliability across time. Example: In the development of a national school test, a class of children is given several tests that are intended to assess the same abilities. A week and a month later, they are given the same tests. With allowances for learning, the variation between the test and retest results is used to assess which tests have better test-retest reliability. **Split-half testing:** Consistency is a measure of reliability based on similarity within the test, with individual questions giving predictable answers every time. Consistency can be measured with **split-half testing**, which works by: - dividing the test into two halves (at the mid-point, by odd/even item numbers, randomly, or by some other method); - administering the halves as separate tests; and - comparing the results from each half. **Split-half testing** works best with tests that are rather long. 
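The split-half steps above can be sketched in code. This is a minimal illustration, not taken from the textbook: the odd/even split, the Pearson correlation, and the Spearman-Brown correction (full-test reliability = 2r / (1 + r)) are standard statistics, but the function names and sample data are my own.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """Estimate reliability from one administration of a test.

    item_scores is a list of per-student item-score lists. We split
    the items into odd- and even-numbered halves, total each half,
    correlate the half scores, then apply the Spearman-Brown
    correction to estimate the reliability of the full-length test.
    """
    odd_totals = [sum(items[0::2]) for items in item_scores]
    even_totals = [sum(items[1::2]) for items in item_scores]
    r_half = pearson_r(odd_totals, even_totals)
    return (2 * r_half) / (1 + r_half)
```

Because each half is only half as long as the real test, the raw half-to-half correlation understates reliability; the Spearman-Brown step adjusts for that, which is also why split-half works best on longer tests.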
**Inter-Rater Reliability:** When multiple people are giving assessments of some kind, or are the subjects of some test, similar performances should lead to the same resulting scores. It can be used to calibrate people, for example those being used as observers in an experiment. **Inter-Rater Reliability** thus evaluates reliability across different people. Two major ways in which **Inter-Rater Reliability** is used are (a) testing how similarly people categorize items, and (b) testing how similarly people score items. This is the best way of assessing reliability when you are using observation, as observer bias very easily creeps in. It does, however, assume you have multiple observers, which is not always the case. **Inter-Rater Reliability is also known as inter-observer reliability or inter-coder reliability.** Examples: Two people may be asked to categorize pictures of animals as being dogs or cats. A perfectly reliable result would be that they both classify the same pictures in the same way. Observers being used in assessing prisoner stress are asked to assess several 'dummy' people who are briefed to respond in a programmed and consistent way. The variation in their results from a standard gives a measure of their reliability. In a test scenario, an IQ test applied to several people with a true score of 120 should result in a score of 120 for everyone; in practice, there will usually be some variation between people. **Alternate forms encompass Parallel-forms Reliability and Internal Consistency Reliability.** **Parallel-forms Reliability:** One problem with questions or assessments is knowing which questions are the best ones to ask. A way of discovering this is to run two tests in parallel, using different questions. Parallel-forms reliability evaluates different questions and question sets that seek to assess the same construct. 
Parallel-forms evaluation may be done in combination with other methods, such as split-half, which divides items that measure the same construct into two tests and applies them to the same group of people. Example: An experimenter develops a large set of questions, splits them into two sets, and administers each set to a randomly selected half of a target sample. **Internal Consistency Reliability:** When asking questions in research, the purpose is to assess the response against a given construct or idea. Different questions that test the same construct should give consistent results. **Internal consistency reliability** evaluates individual questions in comparison with one another for their ability to give consistently appropriate results. 6. What is the Standard Error of Measurement? Why is it important? In your answer, be sure to discuss the terms obtained score, true score and error score. The Standard Error of Measurement (SEM) is an estimate of the error to use in interpreting an individual's test score. A test score is only an estimate of a person's 'true' test performance: the obtained score is the score the pupil actually earned, the true score is the hypothetical score free of measurement error, and the error score is the difference between the two (obtained score = true score + error score). Using a reliability coefficient and the test's standard deviation, we can calculate the SEM. It is important because the higher a test's reliability coefficient, the smaller the test's Standard Error of Measurement; the larger the Standard Error of Measurement, the less reliable the test is. The SEM estimates the error associated with a pupil's obtained score when compared with his or her hypothetical true score. (See Standard Error of Measurement Example Attached w/Hard Copy) 7. There are four factors that can affect the reliability of an assessment measure. Discuss each of these four factors. Any factor which reduces score variability or increases measurement error will also reduce the reliability coefficient. 
For example, all other things being equal, short tests are less reliable than long ones; very easy and very difficult tests are less reliable than moderately difficult tests; and tests where examinees' scores are affected by guessing (e.g. true-false) have lowered reliability coefficients. ===
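The attached hard-copy SEM example is not reproduced here, but the standard arithmetic can be sketched as follows. The formula SEM = SD × √(1 − r) is the conventional one from measurement theory; the numbers (SD = 15, reliability = 0.91, obtained score = 100) are hypothetical values of my own choosing, not from the textbook.

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r): the higher the reliability
    coefficient r, the smaller the SEM."""
    return sd * math.sqrt(1 - reliability)

def confidence_band(obtained_score, sd, reliability):
    """A rough 68% confidence band for the true score:
    the obtained score plus or minus one SEM."""
    e = standard_error_of_measurement(sd, reliability)
    return obtained_score - e, obtained_score + e

# Hypothetical IQ-style test: SD = 15, reliability = 0.91
sem_value = standard_error_of_measurement(15, 0.91)   # 15 * 0.3, about 4.5
low, high = confidence_band(100, 15, 0.91)            # about 95.5 to 104.5
```

The band makes the obtained/true/error distinction concrete: an obtained score of 100 is best read not as an exact value but as "the true score probably lies between about 95.5 and 104.5."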


AMENDMENT TO CBA
To assess their comprehension of the story, I have put together various skill sheets for them to fill out upon completion of the story book. I can assess them using my observation skills while they are playing their sequencing game, see and hear how they are brainstorming the Venn Diagram, and have some concrete pen-on-paper documentation from them using their word list and the matching picture-to-word project.

Lorraine Perillo, SPEN 303 Z, Prof. P. Williams, 02/17/2010. Assignment: CBA in Reading - Special Ed., Grades: Early Elementary

CBA Reading for Special Education Children - Grades: Early Elementary; it can also be utilized for ELL students in the same grades. I would use the classic book entitled "There Was an Old Lady Who Swallowed a Fly" (the author is unknown), and puppets would be used to relay the story.

There was an old lady who swallowed a fly. I don’t know why she swallowed a fly; perhaps she’ll die. (Insert the fly puppet into the old woman puppet’s stomach.)

There was an old lady who swallowed a spider, that wiggled and wriggled and jiggled inside her. She swallowed the spider to catch the fly. I don’t know why she swallowed a fly. Perhaps she’ll die. (Insert the spider puppet into the old woman puppet’s stomach.)

There was an old lady who swallowed a bird. How absurd, to swallow a bird! She swallowed the bird to catch the spider. (Insert the bird puppet into the old woman puppet’s stomach.)

There was an old lady who swallowed a cat. Well, fancy that, she swallowed a cat! She swallowed the cat to catch the bird. (Insert the cat puppet into the old woman puppet’s stomach.)

There was an old lady who swallowed a dog. What a hog, she swallowed a dog! She swallowed the dog to catch the cat. (Insert the dog puppet into the old woman puppet’s stomach.)

There was an old lady who swallowed a cow! She swallowed the cow to catch the dog. (Insert the cow puppet into the old woman puppet’s stomach.)

There was an old lady who swallowed a horse! (Insert the horse puppet into the old woman puppet’s stomach.) She’s died, of course!

This is an interactive, repetitive story book which comes with puppets for visual and tactile effects. There are enough puppets to engage a small group of students at a time. Working with special education students, I have found it necessary to differentiate for as many sensory issues as needed to accommodate the needs of the children. By taking their needs into consideration, you keep them engaged in the story. They use their hands (fine motor skills) to insert the puppets into the old woman’s stomach when they hear (or read) that a specific item has been eaten. The pages are colorful and vibrant for the visually impaired child. For children with an auditory disability, you can speak directly to the child one-on-one, or allow them to try to read the story independently. The children can make reading connections with this CBA that I have put together using this story book. To assess their comprehension of the story, I have put together various skill sheets for them to fill out upon completion of the story book: 1. A word list with the 8 frequently used words in the story: lady, fly, spider, bird, cat, dog, cow and horse. 2. Simple matching of the word to the picture, drawing a line from the picture to the word. 3. A sequencing game: 8 index cards (picture and word on one side; on the opposite side, the position at which that bug/animal came into the story), also with the word so they may trace it. 4. As a group or individually, they can do a Venn Diagram. Specific questions are on the form; I give them prompts, and the results indicate whether or not they are making a connection to the main idea and whether they can recall facts and details of the story. I am interested in seeing if they recognize cause and effect through skill-specific, selected-response questions as stated on their paperwork. 
Can they make predictions, such as the next rhyming sentence? Can they draw conclusions and make inferences about the story? Can they compare and contrast, and, most importantly, can they summarize the story? Students learn reading skills with focused, flexible instruction, and I think this is an appropriate CBA for special education students. The children have background knowledge about these animals, and reading this book has real-world/high-interest value that they can relate to. Not only can this book, and my CBA, work for special education students, but it can also support ELL students as well. It provides knowledge activation for special education and ELL students. For instance, you can use graphic organizers to sort out the storyline (i.e. the Venn Diagram). You can turn this book into an entire theme-based instructional unit. I would be interested in assessing their comprehension through the reading of this book. This CBA can be used as a template for other books, and can be given monthly to see if the students are progressing cognitively and are on target for their benchmarks.

__10/10 points earned__ My answers to Chapter #4, 2/13/10

1. In standardized testing, test tasks are presented under standard conditions so that the student's performance can be compared to the performance of the **professional responsible for test administration.**

2. Which of the following statements describes an adequate testing environment? **B - Chairs and a table are available for the student and the tester. C - All of the equipment and test materials needed for the session are placed on the test table ready for use. D - The temperature and ventilation in the room are comfortable for the student.**

3. Which of the following is **not** good practice in introducing the student to the testing situation? **A - Tests should be scheduled at the same time as the student's favorite classroom activities.**

4. Preparing a student psychologically for testing is called? **B - Establishing rapport.**

5. Suppose a professional began test administration with item number 10 in an attempt to establish a basal of four consecutively numbered correct responses. If the student failed item 10, what test item should be administered next? **When this occurs, the tester must present earlier, easier items. The basal cannot be established without four consecutive correct responses; the tester proceeds backwards until a basal of four consecutively numbered correct items is established.**

6. Give two reasons why it is important to observe the student's behavior during test administration. **1. It provides useful assessment information when substantiated by observation data. 2. It alerts testers to signs of disabilities the student might be displaying.**

10. Choose the statement that best explains the meaning of the score. **B - An age equivalent score of 10-3 indicates that the student's raw score is equal to the average raw score of grade 10.3 students in the norm group.**

11. 
A student received a standard score of 100 on a test in which the mean standard score is 100, the standard deviation is 15, and the scores are normally distributed. This score **B - is within the average range.**

__10/10 points earned__ Lorraine Perillo, Chapter 3, 1/7/2010

1. One major impetus for the passage of federal special education laws was concern over misuse of standardized tests with culturally and linguistically diverse students. Using Table/Figure 3-2 (Relationships Among Different Types of Scores in a Normal Distribution), discuss several inappropriate assessment practices of the past and explain the current legal safeguards to prevent the recurrence of these practices. Teachers are aware that various assessment procedures can be misused or overused, resulting in harmful consequences such as embarrassing students, violating a student's right to confidentiality, and inappropriately using students' standardized achievement test scores to measure teaching effectiveness. There have been several inappropriate assessment practices in the past, such as assessing culturally and linguistically diverse students in the same manner as their classmates, when they should not be. **Norm-referenced tests** need to be modified, or replaced with less formal measures, for this particular group of students. **Criterion-referenced tests** evaluate a student's performance without reference to the performance of other students. There are problems with both types of assessments: the norm group of a norm-referenced test may not represent these students, and criterion-referenced tests are also flawed for these students in regards to the wording and content within the assessment. There are current legal safeguards to prevent the recurrence of these practices. According to the legal guidelines for assessment in federal law, "assessment tools used with students with disabilities must be validated for the specific purpose for which they are used." Thus, studies of a measure's validity should include individuals with disabilities as subjects if that measure is to be used for special education purposes (Fuchs et al., 1987). 2. 
On norm-referenced tests, the standard of comparison against which a student's performance is evaluated is the performance of age (or grade) peers in the norm group. Explain the standard of comparison for informal assessment tools such as classroom quizzes, inventories, and criterion-referenced tests. The standards of comparison for informal assessment tools such as classroom quizzes, inventories, and criterion-referenced tests are explained below: Classroom quiz: an informal assessment tool, usually designed by teachers, to assess students' classroom learning. Inventories: informal assessment devices that sample the student's ability to perform selected skills within a curricular sequence. Criterion-referenced tests: informal assessment devices that assess skill mastery, comparing the student's performance to curricular standards. 3. When professionals select a tool for assessment, they consider not only the technical quality of the measurement device, but also the particular purpose for which it will be used. Tell why a technically poor measure is never an appropriate assessment tool. Then give an example of a situation in which a technically adequate measure is inappropriate because it does not fit the purpose of the assessment. A technically poor measure is never an appropriate assessment tool because an assessment should be used to interpret a student's strengths and errors. Assessment methods should reflect ways that encourage the student's educational development, and invalid information can affect instructional decisions about students. An example of a situation in which a technically adequate measure is inappropriate because it does not fit the purpose of the assessment is any assessment that does not take into consideration the individual student. Special needs students, as well as English Language Learners, cannot fall into the assessment category of norm-referenced testing in a classroom environment. 
It would be impossible to administer, and the end product would be full of inaccuracies. These students cannot be tested in the same group with the expectation of an accurate measure of their skill mastery. 4. Grade equivalents are available on many tests, although there are many criticisms of this type of score. Describe the advantages and disadvantages of grade scores, giving your opinion on the International Reading Association's recommendation that grade equivalents be eliminated from standardized tests. Grading students is an important part of professional practice for teachers. Grading defines both the student's level of performance and a teacher's valuing of that performance. With the use of standardized testing, teachers will understand and be able to articulate why the grades they assign are rational, justified, and fair, acknowledging that such grades reflect their preferences and judgements. The International Reading Association's recommendation that grade equivalents be eliminated from standardized tests has some validity, too. According to the Association, "One of the most serious misuses of tests is the reliance on a grade equivalent as an indicator of absolute performance, when a grade equivalent should be interpreted as an indicator of a test-taker's performance in relation to the performance of other test-takers used to norm the test." Other measures, such as norm-referenced measures that offer other types of derived scores, should be selected in conjunction with age and grade equivalents so that the results do not lead to misinterpretation. 5. Discuss several potential sources of bias in the assessment process, including the selection of inappropriate procedures. Identify five ways in which bias can be introduced into assessment, and discuss how each can be prevented. 
There are many ways in which bias can be introduced into assessment:

- Students evaluated for special education without notice to parents or parental consent
- Culturally biased tests used in evaluation
- Non-English-speaking students assessed in English
- Tests used that penalized students with disabilities
- Placement in services for students with mental retardation based solely on IQ scores
- Assessment focused only on the disability, not the educational program.

There are several ways in which these potential sources of bias can be prevented. One way to take bias seriously would be to ensure strong and varied representation of culturally, ethnically, linguistically and economically diverse groups in the construction of public tests. Another way is to make tests available for public examination after they have been given. A third way to offset bias is to ensure that no single assessment is used to make important educational decisions. (Table 3-2, Legal Safeguards Against Assessment Abuses, is attached to this document.)

Lorraine Perillo, Chapter 3, True or False: 1. False 2. True 3. True 4. False 5. True 6. False 7. True 8. True 9. False 10. False

GOOD JOB GUYS - 10/10 POINTS EARNED FOR CHAPTER 2 === This activity contains 3 questions. 1. Ms. Trapp comes to you, the resource specialist at your school, to consult about William, a student in the second grade. From the information given to you by Ms. Trapp, complete the Prereferral Intervention Checklist on pages 32–33 in your textbook. What other modifications or accommodations could Ms. Trapp try while waiting for the special education assessment to occur? (done in the prereferral) ===

PreReferral Intervention Checklist

Name: William  Age: 7  Date: 1/30/10  Teacher: Ms. Trapp  Grade: Second Grade

1. AREAS OF CONCERN: William has a lot of difficulty with things like mathematics, remembering facts, confusion with computation, and following directions, and he has disruptive behavior. Also, William’s homework assignments are rarely completed.

===2. WHAT KINDS OF STRATEGIES HAVE BEEN EMPLOYED TO RESOLVE THIS PROBLEM? The strategies that have been employed are that William is given visual clues and prompts, and there is a slower pace in the classroom. William is allowed to go to the computer room only if he completes his work in class, as a behavior modification technique. ===

===A. RECORDS REVIEW AND CONFERENCE The assessment process was explained to Ms. Trapp. ===

Assistive Technology specialists
===B. ENVIRONMENTAL MODIFICATIONS The classroom learning environment needs to be addressed. C. INSTRUCTIONAL Visual clues and prompts to facilitate instruction; a slower pace in the classroom; computer accessibility allowed only if he completes his homework in class, as a behavior modification technique. D. MANAGEMENT Team members are assessing his needs to bring up his level of proficiency and design an appropriate IEP for him. 3. WHAT METHODS ARE CURRENTLY EMPLOYED TO ADDRESS THE CONCERN? Assessments of his strengths and weaknesses in mathematics.===

4. WHERE DOES THIS STUDENT STAND IN RELATIONSHIP TO OTHERS IN CLASS, GROUP OR GRADE REGARDING SYSTEMWIDE TESTS, CLASS AVERAGE BEHAVIOR, COMPLETION OF WORK, ETC.? His mathematical skills are far below grade level, but William scores within the average range in intellectual performance and is doing acceptably in reading and language arts.

STUDENT BEHAVIOR: His behavior is a concern because he does not follow directions, he is often disruptive, and his homework and in-class work are rarely completed. His behavior problems are quite severe.

CLASS OR GROUP BEHAVIOR: His behavior at home and in school is disruptive and is a major concern. He shows a pattern of verbal outbursts, negative remarks, physical threats to peers, and out-of-seat behavior.

5. IS THE CONCERN GENERALLY ASSOCIATED WITH A PARTICULAR ITEM, A SUBJECT, OR PERSON? Overall, the child is having problems both in school and at home, especially in math. He has trouble remembering facts and problem solving.

6. IN WHAT AREAS, UNDER WHAT CONDITIONS, DOES THIS STUDENT DO BEST? He does very well with computers, and the teacher uses that as a behavior modification method to guide his behavior. He is doing exceptionally well in reading and language arts.

7. ASSISTANCE REQUESTED (OBSERVATION, MATERIAL, IDEAS, ETC.): The assessments that are required: 1. general intellectual performance; 2. educational performance; 3. performance related to specific disabilities. General intellectual performance is a concern in all mild disabilities.

ASSISTANCE PROVIDED: (MAY BE FOUR – MORE – OR LESS)

He is eligible for Special Education services for students with behavior disorders. His IEP will address his needs in the areas of behavior and mathematics.

=== ACTIVITY #1 for Chapter 2 Done in Class (Lorraine and Kerry) **William and the Challenges of Second Grade** ACTIVITY #2 for Chapter 2 Done at Home (Lorraine only) **William** 1. The assessment team begins to collect information about William. It is determined that you and the psychologist will interview William's mother together. What questions would you have for her regarding William's strengths, interests, and challenges? The questions I would pose to William's mother would be: 1. In his home environment, what are his strengths and weaknesses? What does he like to do? 2. How are his communication skills at home, and his behavior with his siblings/family members? 3. Does he follow your rules without major outbursts? 4. What are his challenges at home? 5. Have you asked your doctor (or other outside sources) about his inability to concentrate, stay on task, and control his outbursts? 2. In order to provide support to the classroom teacher, Ms. Trapp, you observe William in the classroom for a thirty-minute period during language arts instruction and later for a thirty-minute period during math instruction. What information could you glean from this type of observation? William seems to engage nicely in his language arts instruction. He is passing at an average level, and I do not see any problems. On the other hand, in his math instruction, I see he is frustrated and struggling. This frustration is causing outbursts, out-of-seat behavior, and fighting with his peers. His behavior reflects his need to release his pent-up frustration. He is not finishing his in-class assignments, he is off task, he has problems with computation, and his critical thinking skills in math are not up to par, to say the least. He is in dire need of assistance. 3. William's mother asks you about the next steps in the IEP process and when William will start receiving services. She wants to know how long William will be receiving special education services. 
How would you respond to her concerns? First and foremost, she must be made aware of the federal laws mandating an IEP for her child, and it is imperative that she understand the process established by IDEA 2004. It takes a team of specialists to write the IEP, to observe William's behavior and his competence in his studies, and then to interpret their specific assessments, assign realistic goals for William and his teachers to follow, and determine whether assistive technology is needed in math or whether the teacher simply needs to re-evaluate her method of teaching it to him. As for his behavioral issues, specialists from outside sources might need to assess William as well. If the team determines through these various assessments that William is eligible for services, she will find out either way within 30 days; if he is eligible, special education services will then begin. ===

NICE JOB! 10/10 POINTS EARNED.
Lorraine Perillo SPEN 303 Z Essay Portion of Assignment 1. Why is there a need for different types of assessments? What problems would arise if only formal tests or informal measures were available? There is a need for different types of assessments because assessments play a fundamental role in identifying a student's level of understanding of specific coursework, but some assessments are not specific enough to meet all the criteria needed to make a determination of the student's learning needs. A more complex assessment may be required to gain insight into the individual student's learning development. Problems would arise if only formal tests or informal measures were available, because the two produce different kinds of academic information. Formal testing answers specific questions that can lead to adjustments in instruction for individual students; it can be very detail oriented and can identify problems. Informal measures are just that, informal: the student's abilities are observed through teacher observations, parent participation in questionnaire form, or review of a portfolio of work done in class. Both types of assessments are needed to understand the student's learning needs, which may in turn require adjustments to the teacher's strategies for teaching that particular student.

2. Explain why it is important that educational decisions about students with disabilities are made by teams, rather than by a single individual. First and foremost, it is a federal law that addressing the educational needs of a student with disabilities is a team effort. IDEA 2004 (Individuals with Disabilities Education Improvement Act of 2004) guarantees that students with disabilities shall receive a free, appropriate, public education in the least restrictive educational environment. Out of this law comes the IEP (Individualized Education Program), hence the team involvement in structuring the educational plan for that particular student. A single individual is not qualified to address all the issues of the disability and make a determination of an appropriate learning strategy. One person cannot have the expertise to set the proper educational path for a student with any kind of disability, since there can be various learning, physical, and mental disabilities. It takes a team of professionals, each an expert in their field, to guide the course of action and get that student the best education available.

3. IDEA 2004 requires that teams take into consideration the student's involvement with (and progress in) the general education curriculum. What are the implications of this requirement for general education teachers? Are they likely to become more involved in planning programs for students with disabilities? IDEA 2004 set the stage for inclusion in the classroom: all children learn together in the same environment. The students with disabilities in the classroom have unique academic, functional, and developmental needs, and the teachers need to address all of these with the assistance of the team members. Without a doubt, "regular" education teachers are going to have to step up to the plate and take on added responsibilities to ensure all students are learning at their own pace and at their own cognitive level. Their lesson plans will have to be revamped to address the learning needs of the students with disabilities as well as those of the students without disabilities. 4. Although federal special education laws require that assessment procedures be nonbiased, bias does happen. What are some of the reasons for bias, and what can be done to improve current practices? Bias based on gender, race, and socioeconomic status happens in school districts across the United States. I have found some examples of each: Gender Bias: "Sitting in the same classroom, reading the same textbook, listening to the same teacher, boys and girls receive very different educations." (Sadker, 1994) In fact, upon entering school, girls perform equal to or better than boys on nearly every measure of achievement, but by the time they graduate high school or college, they have fallen behind. (Sadker, 1994) However, discrepancies between the performance of girls and the performance of boys in elementary education lead some critics to argue that boys are being neglected within the education system.

Race Bias: Black children are almost three times more likely than white children to be labeled mentally retarded, forcing them into special education classes where progress is slow and trained teachers are in short supply, according to reports released Friday by the Civil Rights Project at Harvard University. Socioeconomic Bias: Lipsky and Gartner (1989) argue that the costs might outweigh the benefits of special education and that many students might be better served in regular education. Further, others have asserted that some students, particularly those from minority and low-income backgrounds, are being inappropriately classified and placed in special education (Cummins, 1984). A straightforward response to eliminating bias in the special education classroom is for teachers to explore alternatives to testing and assessment and to avoid labeling. 5. Why is it important to plan educational programs based on individual student profiles rather than based on diagnosed conditions such as mental retardation or autism? Individual student profiles indicate students' cognitive abilities as documented from different sources (teacher interaction, observations, testing, assessments, measurements, and parent responses), just as for their counterparts in the "regular" classroom, independent of their diagnosed conditions.

SPEN 303Z Lorraine Perillo Chapter 1: Key Terms: **Assessment:** A series of questions an instructor presents to students to see if they understand what was taught. **Measurement:** Collecting information relative to some established rule or standard. **Test:** A special form of assessment to see if a goal was attained. **Bias in Assessment:** The assessment may not be useful; it depends on the accuracy of the "tool" used and the skill of the examiner using it. **Computer Adaptive Testing (CAT):** The computer selects the questions to be asked of the examinee based on the examinee's ability to answer them (i.e., if a hard question is answered correctly, the next question the computer generates will be harder; on the other hand, if an easy question is not answered correctly, the following question will be of the same type, not harder). (An example of a CAT test is the Praxis.)

Multiple Choice: 1. B 2. D 3. C 4. D 5. C 6. B 7. C 8. D 9. D 10. C

#2 SHOULD BE D
#8 SHOULD BE D

True or False: 1. T 2. F 3. F 4. T 5. T 6. F 7. F 8. F 9. T 10. F