AIOU Assignment BEd 1.5 Year 8628 Assessment in Science Education Assignment 1


Q.1 a) Write down the advantages and disadvantages of each mode of assessment. 

Advantages and Disadvantages of Assessment Methods

Different methods of assessment have different strengths and weaknesses, and each can be matched to the student outcomes you have specified. The examples below illustrate this.
Presentations 

Best Application: Content knowledge and skills.
Advantages: Opportunities for authentic contexts. Allows students to demonstrate their work to an authentic audience. Allows for the integration of complex skills.
Disadvantages: Difficult to set up and administer, especially with a large number of students.

Written Products 
Best Application: Content knowledge. Some content skills.
Advantages: Allows students to work over an extended period of time to incorporate revisions. Allows for student craftsmanship, pride, and personal embellishment.
Disadvantages: Difficult to assess individual contributions when the product is a group product. Judging what has been learned is not always evident from looking at products.

Tests 
Best Application: Content knowledge.
Advantages: Allows for a standardized administration to large groups of students. Useful for assessing individual students.
Disadvantages: Difficult to assess skills through paper-and-pencil measures.

Self-Report 
Best Application: Habits of mind.
Advantages: Allows for teacher to assess attitudes, reflections, and thinking processes of students. Allows students to identify the benefits of project work; good for identifying unanticipated consequences.
Disadvantages: Difficult to establish reliable criteria.
*********************************************************************************

b) Explain in detail the limitations of assessment. (10)

Assessment is the gathering of information in the form of data. Students' conceptual knowledge and skill levels are measured and assigned a grade in the form of a number or letter. Concepts are what students know about a topic, and skills are what students can do. An evaluation is then made as a way to judge student achievement. Administrators also use student assessment as a method of measuring teacher accountability.

The limitations of assessment

  • Assessments may have a negative effect on student motivation, particularly for students performing below grade level.
  • Careless implementation of assessments may have negative consequences, especially when the needs of special education students are not considered. Using only a written formal assessment does not provide an overall picture of student achievement.
  • Students who express themselves better orally or visually, or who display superior creativity, are at a disadvantage. Basing teacher effectiveness on standardized test scores may encourage teachers to narrow the curriculum and teach to the test.
  • While it is unclear whether alternative assessments are effective, what is clear is that this debate will not be going away any time soon.

*********************************************************************************

Q.2 Describe in detail three domains of educational objectives. (20)

Three Domains of Learning – Cognitive, Affective, Psychomotor

The Original Cognitive or Thinking Domain –
Based on the 1956 work, Handbook I: Cognitive Domain, behavioral objectives that dealt with cognition could be divided into subsets. These subsets were arranged into a taxonomy and listed according to cognitive difficulty, from simpler to more complex forms. In 2000-01, revisions to the cognitive taxonomy were spearheaded by one of Bloom's former students, Lorin Anderson, and Bloom's original partner in defining and publishing the cognitive domain, David Krathwohl (see Anderson and Krathwohl's revision of Bloom's taxonomy for further details).

Remember that while it is good to understand the history of the older version of this domain, the newer version has a number of strong advantages that make it a better choice for planning instruction today. One of the major changes between the old and the updated version is that the two highest forms of cognition have been reversed. In the older version the listing from simplest to most complex function was ordered as knowledge, comprehension, application, analysis, synthesis, and evaluation. In the newer version the categories become verbs and are arranged as remembering, understanding, applying, analyzing, evaluating, and, the last and highest function, creating.

Taxonomies of the Cognitive Domain

Bloom’s Taxonomy 1956
1. Knowledge: Remembering or retrieving previously learned material. Examples of verbs that relate to this function are:

  • know identify relate list
  • define recall memorize repeat
  • record name recognize acquire


2. Comprehension: The ability to grasp or construct meaning from material. Examples of verbs that relate to this function are:
  • restate locate report recognize explain express
  • identify discuss describe review infer
  • illustrate interpret draw represent differentiate conclude

3. Application: The ability to use learned material, or to implement material in new and concrete situations. Examples of verbs that relate to this function are:

  • apply relate develop translate use operate
  • organize employ restructure interpret demonstrate illustrate
  • practice calculate show exhibit dramatize
4. Analysis: The ability to break down or distinguish the parts of material into its components so that its organizational structure may be better understood. Examples of verbs that relate to this function are:


  • analyze compare probe inquire examine contrast categorize
  • differentiate contrast investigate detect survey classify deduce
  • experiment scrutinize discover inspect dissect discriminate separate


5. Synthesis: The ability to put parts together to form a coherent or unique new whole. In the revised version of Bloom's taxonomy, synthesis becomes creating, the last and most complex cognitive function. Examples of verbs that relate to the synthesis function are:

  • compose produce design assemble create prepare predict modify tell
  • plan invent formulate collect set up generalize document combine relate
  • propose develop arrange construct organize originate derive write propose


Anderson and Krathwohl’s Taxonomy 2001

1. Remembering: Recognizing or recalling knowledge from memory. Remembering is when memory is used to produce or retrieve definitions, facts, or lists, or to recite previously learned information.

2. Understanding: Constructing meaning from different types of functions be they written or graphic messages, or activities like interpreting, exemplifying, classifying, summarizing, inferring, comparing, or explaining.

3. Applying: Carrying out or using a procedure through executing, or implementing. Applying relates to or refers to situations where learned material is used through products like models, presentations, interviews or simulations.

4. Analyzing: Breaking materials or concepts into parts, determining how the parts relate to one another or how they interrelate, or how the parts relate to an overall structure or purpose. Mental actions included in this function are differentiating, organizing, and attributing, as well as being able to distinguish between the components or parts. When one is analyzing, he/she can illustrate this mental function by creating spreadsheets, surveys, charts, or diagrams, or graphic representations.

5. Evaluating: Making judgments based on criteria and standards through checking and critiquing. Critiques, recommendations, and reports are some of the products that can be created to demonstrate the processes of evaluation. In the newer taxonomy, evaluating comes before creating as it is often a necessary part of the precursory behavior before one creates something.

The Affective or Feeling Domain:

 Like cognitive objectives, affective objectives can also be divided into a hierarchy (according to Krathwohl). This area is concerned with feelings or emotions. Again, the taxonomy is arranged from simpler feelings to those that are more complex. This domain was first described in 1964 and as noted before is attributed to David Krathwohl as the primary author.

1. Receiving: This refers to the learner’s sensitivity to the existence of stimuli – awareness, willingness to receive, or selected attention.

  • feel sense capture experience 
  • pursue attend perceive 


2. Responding: This refers to the learners’ active attention to stimuli and his/her motivation to learn – acquiescence, willing responses, or feelings of satisfaction.

  • conform allow cooperate 
  • contribute enjoy satisfy 


3. Valuing: This refers to the learner's beliefs and attitudes of worth – an acceptance of, preference for, or commitment to a value.

  • believe seek justify 
  • respect search persuade 


4. Organization: This refers to the learner's internalization of values and beliefs involving (1) the conceptualization of values; and (2) the organization of a value system. As values or beliefs become internalized, the learner organizes them according to priority.

  • examine clarify systematize 
  • create integrate 


5. Characterization – the Internalization of values:
This refers to the learner's highest level of internalization and relates to behavior that reflects (1) a generalized set of values; and (2) a characterization or a philosophy of life. At this level the learner is capable of practicing and acting on his or her values or beliefs.

The Psychomotor or Kinesthetic Domain:
Psychomotor objectives are those specific to discrete physical functions, reflex actions, and interpretive movements. Traditionally, these types of objectives are concerned with the physical encoding of information, with movement, and/or with activities where the gross and fine muscles are used for expressing or interpreting information or concepts. This area also refers to natural, autonomic responses or reflexes.

It is interesting to note that while the cognitive taxonomy was described in 1956, and the affective in 1964, the psychomotor domain was not fully described until the 1970s. And while the work of Anita Harrow is used here, there are actually two other psychomotor taxonomies to choose from: one from E. J. Simpson (1972) and the other from R. H. Dave (1970).

Reflex movements:
Objectives at this level include segmental reflexes (involving one segment of the spine) and intersegmental reflexes (involving more than one segment of the spine), e.g., involuntary muscle contraction. These movements are involuntary, being either present at birth or emerging through maturation.

Fundamental movements:
Objectives in this area refer to skills or movements or behaviors related to walking, running, jumping, pushing, pulling and manipulating. They are often components for more complex actions.

Perceptual abilities:
Objectives in this area should address skills related to kinesthetic (bodily movements), visual, auditory, tactile (touch), or coordination abilities as they are related to the ability to take in information from the environment and react.

Physical abilities:
Objectives in this area should be related to endurance, flexibility, agility, strength, reaction-response time or dexterity.

Skilled movements:
Objectives in this area refer to skills and movements that must be learned for games, sports, dances, performances, or for the arts.
*********************************************************************************

Q.3 a) How can table of specifications be developed for assessment? How is it helpful for assessment? (10)

Assessment of Learning

Table of Specification:
The purpose of a Table of Specifications is to identify the achievement domains being measured and to ensure that a fair and representative sample of questions appear on the test. Teachers cannot measure every topic or objective and cannot ask every question they might wish to ask. A Table of Specifications allows the teacher to construct a test which focuses on the key areas and weights those different areas based on their importance. A Table of Specifications provides the teacher with evidence that a test has content validity, that it covers what should be covered.

Designing a Table of Specifications:
Tables of Specification typically are designed based on the list of course objectives, the topics covered in class, the amount of time spent on those topics, textbook chapter topics, and the emphasis and space provided in the text. In some cases a great weight will be assigned to a concept that is extremely important, even if relatively little class time was spent on the topic.
Three steps are involved in creating a Table of Specifications:
1) choosing the measurement goals and domain to be covered,
2) breaking the domain into key or fairly independent parts (concepts, terms, procedures, applications), and
3) constructing the table.
Teachers have already made decisions (or the district has decided for them) about the broad areas that should be taught, so the choice of what broad domains a test should cover has usually already been made. A bit trickier is to outline the subject matter into smaller components, but most teachers have already had to design teaching plans, strategies, and schedules based on an outline of content. A minimal sketch of how items can be apportioned across topics follows.
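To make the weighting in step 3 concrete, the following minimal Python sketch apportions a fixed number of test items across topics in proportion to instructional time. The topic names and hour counts are invented for illustration, and the largest-remainder rounding is just one reasonable way to keep the item total exact; it is not a prescribed procedure.

```python
def allocate_items(hours_per_topic, total_items):
    """Distribute total_items across topics in proportion to hours taught."""
    total_hours = sum(hours_per_topic.values())
    exact = {t: total_items * h / total_hours for t, h in hours_per_topic.items()}
    counts = {t: int(e) for t, e in exact.items()}  # round each share down first
    remaining = total_items - sum(counts.values())
    # hand any leftover items to the topics with the largest fractional remainders
    for t in sorted(exact, key=lambda t: exact[t] - counts[t], reverse=True)[:remaining]:
        counts[t] += 1
    return counts


if __name__ == "__main__":
    # hypothetical topics and instructional hours, not a prescribed curriculum
    hours = {"Cells": 6, "Photosynthesis": 4, "Ecosystems": 5, "Human body": 5}
    print(allocate_items(hours, total_items=40))
```

The resulting counts become the row totals of the table; the same proportional split can then be applied across cognitive levels (knowledge, comprehension, application) to fill in the columns.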

How can the use of a Table of Specifications benefit your students, including those with special needs?
A Table of Specifications benefits students in two ways. First, it improves the validity of teacher-made tests. Second, it can improve student learning as well. A Table of Specifications helps to ensure that there is a match between what is taught and what is tested. Classroom assessment should be driven by classroom teaching which itself is driven by course goals and objectives. In the chain below, Tables of Specifications provide the link between teaching and testing.

Objectives Teaching Testing:
Tables of Specifications can help students at all ability levels learn better. By providing the table to students during instruction, students can recognize the main ideas, key skills, and the relationships among concepts more easily. The Table of Specifications can act in the same way as a concept map to analyze content areas. Teachers can even collaborate with students on the construction of the Table of Specifications- what are the main ideas and topics, what emphasis should be placed on each topic, what should be on the test? Open discussion and negotiation of these issues can encourage higher levels of understanding while also modeling good learning and study skills.

Reliability should also be considered when creating assessments, grading student work, and analyzing student performance on individual test items or criteria.

Examples of Reliability Measures:

  • Inter-rater – Two separate individuals (for instance, instructor and TF, or peers) evaluate and score a subject’s test, essay, or performance, and the scores from each of the raters are correlated. The correlation coefficient is then used as an estimate of reliability. Several other statistics can also be calculated by instructors to compare the scores from two raters. For instance, Cohen’s kappa considers the amount of agreement that may occur between two raters as a result of chance.
  • Test-Retest – Individuals take the same test on separate occasions and the scores can be correlated by instructors, using the correlation coefficient as the estimate of reliability. Because individuals learn from tests, this approach should be sensitive to the amount of time and degree of learning between test administrations.
  • Parallel Forms – Two equivalent tests, measuring the same concepts, knowledge, skills, abilities, etc., are given to the same group of individuals, and the scores can be correlated by instructors. The correlation coefficient is the estimate of reliability. Instructors should note that designing two tests that are truly parallel can be very difficult.
  • Split-Half – One test is divided into two sets of items. An individual’s score on half of the test is correlated with their score on the other half of the test. This approach accounts for testing fatigue and gradual shifts in approach as the test was designed. Instructors can decide to split a test in many different ways (i.e. even versus odd, first versus last, etc.), but should be aware that the splitting method will influence the correlation coefficient.
  • Cronbach's Alpha – Cronbach's alpha is the most commonly reported measure of reliability when analyzing Likert-type scales or multiple-choice tests. It is generally interpreted as the mean of all possible split-half combinations, or the average or central tendency when a test is split against itself. For reference, an alpha above .7 is typically considered acceptable. Cronbach's alpha can be calculated in Excel or any other statistical software package; a short computational sketch of Cohen's kappa and Cronbach's alpha follows this list.
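As a rough illustration of how two of these measures are computed, here is a minimal sketch in plain Python. The rater labels and the score matrix are invented purely for illustration; in practice a statistical package would normally be used.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical scores to the same work."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)


def cronbachs_alpha(scores):
    """Cronbach's alpha; scores is a list of per-student lists of item scores."""
    k = len(scores[0])                       # number of items

    def var(xs):                             # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)


# invented data: two raters grading five essays, and five students on a four-item quiz
rater_a = ["pass", "pass", "fail", "pass", "fail"]
rater_b = ["pass", "fail", "fail", "pass", "fail"]
print(round(cohens_kappa(rater_a, rater_b), 2))   # agreement beyond chance

quiz = [[1, 1, 1, 0], [1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 1], [0, 0, 0, 0]]
print(round(cronbachs_alpha(quiz), 2))            # internal consistency of the quiz
```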

*********************************************************************************

b) Develop one table of specification of any unit from science curriculum. (10)

Answer: Table of Specifications:

 A Table of Specifications is a two-way chart which describes the topics to be covered in a test and the number of items or points which will be associated with each topic. Sometimes the types of items are described as well.

The purpose of a Table of Specifications is to identify the achievement domains being measured and to ensure that a fair and representative sample of questions appear on the test. As it is impossible, in a test, to assess every topic from every aspect, a Table of Specifications allows us to ensure that our test focuses on the most important areas and weights different areas based on their importance / time spent teaching. A Table of Specifications also gives us the proof we need to make sure our test has content validity.

Tables of Specifications are designed based on:

  • Course objectives
  • Topics covered in class
  • Amount of time spent on those topics
  • Textbook chapter topics
  • Emphasis and space provided in the text 


A Table of Specification could be designed in 3 simple steps:
1. Identify the domain that is to be assessed
2. Break the domain into levels (e.g. knowledge, comprehension, application …)
3. Construct the table:
The more detailed a Table of Specifications is, the easier it is to construct the test.


Analyze results by level and content area:
If students answer the lower-level questions correctly but consistently miss the higher-level ones, instruction needs to move beyond recall; if the class performs well on one unit but not another, that unit may need to be retaught. A minimal tallying sketch follows.
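The tallying itself is simple once each item carries a content-area and cognitive-level tag. The tags and student responses below are invented for illustration only.

```python
from collections import defaultdict

# invented tags: (content area, cognitive level) for each of six items
item_tags = [
    ("Viruses", "knowledge"), ("Viruses", "application"),
    ("Bacteria", "knowledge"), ("Bacteria", "application"),
    ("Pollution", "knowledge"), ("Pollution", "application"),
]

# invented responses: one row per student, 1 = correct, 0 = incorrect
responses = [
    [1, 0, 1, 0, 1, 1],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 0],
]

totals = defaultdict(lambda: [0, 0])              # tag -> [correct, attempted]
for row in responses:
    for score, tag in zip(row, item_tags):
        totals[tag][0] += score
        totals[tag][1] += 1

for (content, level), (correct, attempted) in sorted(totals.items()):
    print(f"{content:<10} {level:<12} {100 * correct / attempted:.0f}% correct")
```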

Table of Specification
Item type: MCQs      Time allowed: 60 min      Total marks: 50

Sr. #   Contents                                       Number of MCQs
1       Viruses                                        4
2       Types of viruses                               4
3       Bacteria                                       5
4       Types of bacteria                              4
5       Diseases caused by bacteria                    5
6       Diseases caused by viruses                     4
7       Pollution                                      3
8       Skin issues                                    3
9       Infections                                     3
10      Diseases of the gastrointestinal tract         2
11      Habitats of bacteria                           5
12      Habitats of viruses                            2
13      Infectious diseases in children                1
14      Mortality rate of infectious diseases          4
15      Pollution hazards                              1
        Total MCQs: 50                                 Total marks: 50
*********************************************************************************

Q.4 a) How can assessment of knowledge of ways and means of dealing with specifics be done? Explain your answer with examples. (10) 

Although assessments are currently used for many purposes in the educational system, a premise of this report is that their effectiveness and utility must ultimately be judged by the extent to which they promote student learning. The aim of assessment should be “to educate and improve student performance, not merely to audit it.” To this end, people should gain important and useful information from every assessment situation. In education, as in other professions, good decision making depends on access to relevant, accurate, and timely information. Furthermore, the information gained should be put to good use by informing decisions about curriculum and instruction and ultimately improving student learning (Falk, 2000; National Council of Teachers of Mathematics, 1995).

Assessments do not function in isolation; an assessment’s effectiveness in improving learning depends on its relationships to curriculum and instruction. Ideally, instruction is faithful and effective in relation to curriculum, and assessment reflects curriculum in such a way that it reinforces the best practices in instruction. In actuality, however, the relationships among assessment, curriculum, and instruction are not always ideal. That synergy can best be achieved if the three parts of the system are bound by or grow out of a shared knowledge base about cognition and learning in the domain.

Purposes and Contexts of Use
Educational assessment occurs in two major contexts. The first is the classroom. Here assessment is used by teachers and students mainly to assist learning, but also to gauge students’ summative achievement over the longer term. Second is large-scale assessment, used by policy makers and educational leaders to evaluate programs and/or obtain information about whether individual students have met learning goals.

The sharp contrast that typically exists between classroom and large-scale assessment practices arises because assessment designers have not been able to fulfill the purposes of different assessment users with the same data and analyses. To guide instruction and monitor its effects, teachers need information that is intimately connected with the work their students are doing, and they interpret this evidence in light of everything else they know about their students and the conditions of instruction. Part of the power of classroom assessment resides in these connections. Yet precisely because they are individualized and highly contextualized, neither the rationale nor the results of typical classroom assessments are easily communicated beyond the classroom. Large-scale, standardized tests do communicate efficiently across time and place, but by so constraining the content and timeliness of the message that they often have little utility in the classroom. This contrast illustrates the more general point that one size of assessment does not fit all. The purpose of an assessment determines priorities, and the context of use imposes constraints on the design, thereby affecting the kinds of information a particular assessment can provide about student achievement.

Inevitability of Trade-Offs in Design
To say that an assessment is a good assessment or that a task is a good task is like saying that a medical test is a good test; each can provide useful information only under certain circumstances. An MRI of a knee, for example, has unquestioned value for diagnosing cartilage damage, but is not helpful for diagnosing the overall quality of a person’s health. It is natural for people to understand medical tests in this way, but not educational tests. The same argument applies nonetheless, but in ways that are less familiar and perhaps more subtle.

Multiple assessments are thus needed to provide the various types of information required at different levels of the educational system. This does not mean, however, that the assessments need to be disconnected or working at cross-purposes. If multiple assessments grow out of a shared knowledge base about cognition and learning in the domain, they can provide valuable multiple perspectives on student achievement while supporting a core set of learning goals. Stakeholders should not be unduly concerned if differing assessments yield different information about student achievement; in fact, in many circumstances this is exactly what should be expected. However, if multiple assessments are to support learning effectively and provide clear and meaningful results for various audiences, it is important that the purposes served by each assessment and the aspects of achievement sampled by any given assessment be made explicit to users.

Later in the chapter we address how multiple assessments, including those used across both classroom and large-scale contexts, could work together to form more complete assessment systems. First, however, we discuss classroom and large-scale assessments in turn and how each can best be used to serve the goals of learning.

Classroom Assessment
The first thing that comes to mind for many people when they think of “classroom assessment” is a midterm or end-of-course exam, used by the teacher for summative grading purposes. But such practices represent only a fraction of the kinds of assessment that occur on an ongoing basis in an effective classroom. The focus in this section is on assessments used by teachers to support instruction and learning, also referred to as formative assessment. Such assessment offers considerable potential for improving student learning when informed by research and theory on how students develop subject matter competence.

As instruction is occurring, teachers need information to evaluate whether their teaching strategies are working. They also need information about the current understanding of individual students and groups of students so they can identify the most appropriate next steps for instruction.

Moreover, students need feedback to monitor their own success in learning and to know how to improve. Teachers make observations of student understanding and performance in a variety of ways: from classroom dialogue, questioning, seatwork and homework assignments, formal tests, less formal quizzes, projects, portfolios, and so on.

Black and Wiliam (1998) provide an extensive review of more than 250 books and articles presenting research evidence on the effects of classroom assessment. They conclude that ongoing assessment by teachers, combined with appropriate feedback to students, can have powerful and positive effects on achievement. They also report, however, that the characteristics of high-quality formative assessment are not well understood by teachers and that formative assessment is weak in practice.

BOX 6–1 Transforming Classroom Assessment Practices
A project at King’s College London (Black and Wiliam, 2000) illustrates some of the issues encountered when an effort is made to incorporate principles of cognition and reasoning from evidence into classroom practice. The project involved working closely with 24 science and mathematics teachers to develop their formative assessment practices in everyday classroom work. During the course of the project, several aspects of the teaching and learning process were radically changed.

One such aspect was the teachers’ practices in asking questions in the classroom. In particular, the focus was on the notion of wait time (the length of the silence a teacher would allow after asking a question before speaking again if nobody responded), with emphasis on how short this time usually is. The teachers altered their practice to give students extended time to think about any question posed, often asking them to discuss their ideas in pairs before calling for responses. The practice of students putting up their hands to volunteer answers was forbidden; anyone could be asked to respond. The teachers did not label answers as right or wrong, but instead asked a student to explain his or her reasons for the answer offered. Others were then asked to say whether they agreed and why. Thus questions opened up discussion that helped expose and explore students’ assumptions and reasoning. At the same time, wrong answers became useful input, and the students realized that the teacher was interested in knowing what they thought, not in evaluating whether they were right or wrong. As a consequence, teachers asked fewer questions, spending more time on each.

High-quality classroom assessment is a complex process, as illustrated by the research described in Box 6-1, which encapsulates many of the points made in the following discussion. In brief, the development of good formative assessment requires radical changes in the ways students are encouraged to express their ideas and in the ways teachers give feedback to students so they can develop the ability to manage and guide their own learning. Where such innovations have been instituted, teachers have become acutely aware of the need to think more clearly about their own assumptions regarding how students learn.

In addition, teachers realized that their lesson planning had to include careful thought about the selection of informative questions. They discovered that they had to consider very carefully the aspects of student thinking that any given question might serve to explore. This discovery led them to work further on developing criteria for the quality of their questions. Thus the teachers confronted the importance of the cognitive foundations for designing assessment situations that can evoke important aspects of student thinking and learning. (See Bonniol [1991] and Perrenoud [1998] for further discussion of the importance of high-quality teacher questions for illuminating student thinking.)

In response to research evidence that simply giving grades on written work can be counterproductive for learning (Butler, 1988), teachers began instead to concentrate on providing comments without grades—feedback designed to guide students’ further learning. Students also took part in self-assessment and peer-assessment activities, which required that they understand the goals for learning and the criteria for quality that applied to their work. In these ways, assessment situations became opportunities for learning, rather than activities divorced from learning.

There is a rich literature on how classroom assessment can be designed and used to improve instruction and learning (e.g., Falk, 2000; Neoga, 1995; Shepard, 2000; Stiggins, 1997; Wiggins, 1998). This literature presents powerful ideas and practical advice to assist teachers across the K-16 spectrum in improving their classroom assessment practices. We do not attempt to summarize all of the insights and implications for practice presented in this literature. Rather, our emphasis is on what could be gained by thinking about classroom assessment in light of the principles of cognition and reasoning from evidence emphasized throughout this report.

Formative Assessment, Curriculum, and Instruction
How might the culture of classrooms be shifted so that students no longer feign competence or work to perform well on the test as an end separate from real learning? To accomplish this kind of transformation, we have to make assessment more useful, more helpful in learning, and at the same time change the social meaning of evaluation.

Shepard proceeded to discuss ways in which classroom assessment practices need to change: the content and character of assessments need to be significantly improved to reflect contemporary understanding of learning; the gathering and use of assessment information and insights must become a part of the ongoing learning process; and assessment must become a central concern in methods courses in teacher preparation programs. Shepard’s messages were reflective of a growing belief among many educational assessment experts that if assessment, curriculum, and instruction were more integrally connected, student learning would improve.

Formative assessment of this kind requires three things:

  • A clear view of the learning goals. 
  • Information about the present state of the learner. 
  • Action to close the gap.


Cognitively Guided Instruction and Assessment 
Carpenter, Fennema, and colleagues have demonstrated that teachers who are informed regarding children’s thinking about arithmetic will be in a better position to craft more effective mathematics instruction.

Given a student’s solution to a problem, a classroom teacher can modify instruction in a number of ways:

  1. By posing a developmentally more difficult or easier problem;
  2. By altering the size of the numbers in the set; or
  3. By comparing and contrasting students’ solution strategies, so that students can come to appreciate the utility and elegance of a strategy they might not yet be able to generate on their own. For example, a student directly modeling a joining of sets with counters
*********************************************************************************

b) Develop items for the assessment of knowledge of universal and abstraction in a field. (10) 

An entity is part of a universal being, which embraces and transcends all reality.

Abstraction: a fundamental moment of the cognitive process

We can point out that abstraction is a process by which the human intellect draws universal concepts out of individual objects, regardless of their spatio-temporal characteristics. Plato placed among the functions of dialectics that of distinguishing one idea from another so that each can be accountable to itself and to others. The dialectic consists in the interaction between two opposing theses or principles (symbolically represented in Plato's dialogues by two real people) and is used as an investigative tool for truth.

Aristotle believed that the intelligible forms existed only in sensible objects, as sources of their universal characteristics, and that the soul could, by abstraction, make them stand out in their purity so as to achieve the ability to know them.

Aristotle distinguishes three degrees of abstraction.
There is a physical abstraction, which captures what is characteristic of a number of material bodies. This form of abstraction does not rise above the physical world; it only detects a kind of footprint common to several concrete things. The second level of abstraction is the mathematical one, which rises above the purely material level and identifies the immaterial characteristics common to corporeal things. In other words, mathematical abstraction considers the real under its geometrical and physical aspects. Finally, the metaphysical abstraction, which is independent of the general characteristics of matter, rises above the extension of bodies to capture the essential features, the immaterial aspect (that is, to declare the truth) of what is.


For Hegel, dialectics has two closely related meanings: in a first sense it is the process by which the Absolute recognizes itself in a reality that, at first, appeared to it as alien or opposed, removing or reconciling precisely that opposition; in a second sense it is the process by which reality, overcoming its divisions, comes to rest, as Hegel says, in the unity of the Whole. In the latter sense, arranged in detail by Aristotle, the term is used to denote the operation that brings the intelligible into the sensible.
*********************************************************************************

Q.5 a) How can a science teacher assess application objectives? (10)

Objectives of Science Teaching
Constructivist Teaching
Learning takes place when there is a change in the learner’s existing ideas, either by adding some new knowledge or by reorganizing what is already known. There are three useful approaches to constructivist science teaching.

  1. Teacher demonstrates the unknown event
  2. Teacher leads discussion and identifies examples drawn from student experiences
  3. Students conduct the event and discuss the event

Characteristics of constructivist teaching

  • Prior awareness of the ideas that students bring to the learning situation
  • Clearly defined conceptual goals for learners
  • Use of teaching strategies which challenge or develop the initial ideas of students
  • Providing learners with opportunities to use the new ideas
  • Providing a classroom atmosphere that encourages students to suggest and discuss ideas

A well-stated learning objective includes:

  • (1) a statement of what students will be able to do when a lesson is completed,
  • (2) the conditions under which the students will be able to perform the task, and
  • (3) the criteria for evaluating student performance.

While goals describe global learning outcomes, learning objectives are statements of specific performances that contribute to the attainment of goals. Learning objectives should help guide curriculum development, instructional strategies, the selection of instructional materials, and the development of assessments.
*********************************************************************************

b) Develop five test items from general science to test application skills. (10)

Designing Science Assessments

In this report the committee has stressed the importance of considering the assessment system as a whole.

How Tests and Test Questions are developed

ETS develops assessments that are of the highest quality, accurately measure the necessary knowledge and skills, and are fair to all test takers. We understand that creating a fair, valid and reliable test is a complex process that involves multiple checks and balances.

That's why dozens of professionals — including test specialists, test reviewers, editors, teachers and specialists in the subject or skill being tested — are involved in developing every test question, or "test item." And it's why all questions (or "items") are put through multiple, rigorous reviews and meet the highest standards for quality and fairness in the testing industry.

Step 1: Defining Objectives
Educators, licensing boards or professional associations identify a need to measure certain skills or knowledge. Once a decision is made to develop a test to accommodate this need, test developers ask some fundamental questions:

  • Who will take the test and for what purpose? 
  • What skills and/or areas of knowledge should be tested?
  • How should test takers be able to use their knowledge?
  • What kinds of questions should be included? How many of each kind?
  • How long should the test be?
  • How difficult should the test be? 


Step 2: Item Development Committees
The questions in Step 1 are usually answered with the help of item development committees, which typically consist of educators and/or other professionals appointed by ETS with the guidance of the sponsoring agency or association. Responsibilities of these item development committees may include:

  • defining test objectives and specifications
  • helping ensure test questions are unbiased
  • Determining test format (e.g., multiple-choice, essay, constructed-response, etc.)
  • considering supplemental test materials
  • reviewing test questions, or test items, written by ETS staff
  • writing test questions


Step 3: Writing and Reviewing Questions
Each test question — written by ETS staff or item development committees — undergoes numerous reviews and revisions to ensure it is as clear as possible, that it has only one correct answer among the options provided on the test and that it conforms to the style rules used throughout the test. Scoring guides for open-ended responses, such as short written answers, essays and oral responses, go through similar reviews.

Step 4: The Pretest
After the questions have been written and reviewed, many are pretested with a sample group similar to the population to be tested. The results enable test developers to determine (a brief item-analysis sketch follows this list):

  • the difficulty of each question
  • if questions are ambiguous or misleading
  • if questions should be revised or eliminated
  • if incorrect alternative answers should be revised or replaced
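As an illustration of the kind of statistics a pretest yields, the sketch below computes a classical difficulty index (proportion correct) and a corrected item-total discrimination for each item from an invented response matrix. Actual ETS procedures are more elaborate; this is only a generic example.

```python
# invented pretest responses: rows are test takers, columns are items, 1 = correct
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for i in range(len(responses[0])):
    item = [row[i] for row in responses]
    rest = [sum(row) - row[i] for row in responses]     # total score minus this item
    difficulty = mean(item)                             # proportion answering correctly
    discrimination = pearson(item, rest)                # corrected item-total correlation
    print(f"item {i + 1}: difficulty={difficulty:.2f}, discrimination={discrimination:.2f}")
```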


Step 5: Detecting and Removing Unfair Questions
To meet the stringent ETS Standards for Quality and Fairness guidelines, trained reviewers must carefully inspect each individual test question, the test as a whole and any descriptive or preparatory materials to ensure that language, symbols, words, phrases and content generally regarded as sexist, racist or otherwise inappropriate or offensive to any subgroup of the test-taking population are eliminated.

Through a process called Differential Item Functioning (DIF), ETS statisticians can also identify questions on which two groups of test takers who have demonstrated similar knowledge or skills perform differently. If one group performs consistently better than another on a particular question, that question receives additional scrutiny and may be deemed biased or unsatisfactory. Note: if people in different groups actually differ in their average levels of relevant knowledge or skills, a fair test question will reflect those differences. A generic sketch of this kind of analysis appears below.
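The sketch below illustrates the general Mantel-Haenszel style of DIF screening on invented data: test takers are grouped into matched total-score bands, and the odds of answering the studied item correctly are compared across the reference and focal groups within each band. It is a generic illustration of the idea, not ETS's actual procedure.

```python
from collections import defaultdict

# invented records: (group, total-score band, answered the studied item correctly?)
records = [
    ("reference", 0, 1), ("reference", 0, 0), ("focal", 0, 1), ("focal", 0, 0),
    ("reference", 1, 1), ("reference", 1, 0), ("focal", 1, 1), ("focal", 1, 0),
    ("reference", 2, 1), ("reference", 2, 1), ("focal", 2, 1), ("focal", 2, 0),
]

# build a 2x2 table per score band: A/B = reference right/wrong, C/D = focal right/wrong
strata = defaultdict(lambda: {"A": 0, "B": 0, "C": 0, "D": 0})
for group, band, correct in records:
    if group == "reference":
        strata[band]["A" if correct else "B"] += 1
    else:
        strata[band]["C" if correct else "D"] += 1

num = den = 0.0
for cell in strata.values():
    total = sum(cell.values())
    num += cell["A"] * cell["D"] / total
    den += cell["B"] * cell["C"] / total

print(f"Mantel-Haenszel odds ratio: {num / den:.2f}")   # values far from 1.0 flag the item
```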

Step 6: Assembling the Test
After the test is assembled, it is reviewed by other specialists, committee members and sometimes other outside experts. Each reviewer answers all questions independently and submits a list of correct answers to the test developers. The lists are compared with the ETS answer keys to verify that the intended answer is, indeed, the correct answer. Any discrepancies are resolved before the test is published.
*********************************************************************************
