In its broadest sense, e-assessment is the use of information technology for any assessment-related activity. This definition embraces a wide range of student activity ranging from the use of a word processor to on-screen testing. Due to its obvious similarity to e-learning, the term e-assessment is becoming widely used as a generic term to describe the use of computers within the assessment process. Specific types of e-assessment include computerized adaptive testing and computerized classification testing. E-assessment can also refer to e-marking.
E-assessment can be used to assess not only cognitive and practical abilities but also psychological constructs such as social anxiety, for example via the brief form of the Social Phobia and Anxiety Inventory (SPAI-B). Cognitive abilities are assessed using e-testing software, while practical abilities are assessed using e-portfolios or simulation software.
An e-testing system designed to focus on lower level associations comprises two components: (1) an assessment engine; and (2) an item bank. An assessment engine comprises the hardware and software required to create and deliver a test. Most e-testing engines run on standard hardware so the key characteristic is the software's functionality. There is a wide range of software packages. The software does not include the questions themselves; these are provided by an item bank. Once created, the engine uses the item bank to generate a test. Traditional paper-and-pencil testing is similar, but the test is pulled from the bank at only one time, when it is sent to publishing.
The creation of the item bank is more costly and time consuming than the installation and configuration of the assessment engine. This is due to the fact that assessment engines can be bought "off the shelf," whereas an item bank must be developed for each specific application.
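The separation between assessment engine and item bank described above can be sketched in code. The class and field names below are illustrative inventions, not taken from any particular product:

```python
import random

# Illustrative sketch of the engine/item-bank split: the item bank
# holds the questions; the engine only knows how to draw items from
# a bank and assemble them into a test.

class ItemBank:
    """A pool of question items, each tagged with a topic."""
    def __init__(self):
        self.items = []

    def add(self, topic, question, answer):
        self.items.append({"topic": topic, "question": question, "answer": answer})

    def by_topic(self, topic):
        return [i for i in self.items if i["topic"] == topic]

class AssessmentEngine:
    """Generates a test by drawing items from a bank.

    Unlike paper-and-pencil testing, a fresh draw can be made for
    every sitting rather than once at publication time."""
    def __init__(self, bank):
        self.bank = bank

    def generate_test(self, topic, n_items, seed=None):
        pool = self.bank.by_topic(topic)
        rng = random.Random(seed)
        return rng.sample(pool, min(n_items, len(pool)))

bank = ItemBank()
bank.add("arithmetic", "2 + 2 = ?", "4")
bank.add("arithmetic", "3 * 3 = ?", "9")
bank.add("arithmetic", "10 - 7 = ?", "3")

engine = AssessmentEngine(bank)
test = engine.generate_test("arithmetic", 2, seed=1)
print(len(test))  # 2
```

The sketch also illustrates why the bank dominates the cost: the engine is generic and reusable, while every item must be written, reviewed and calibrated for the specific application.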
An e-assessment system designed to assess more sophisticated forms of knowledge requires some sort of interactive activity and a system for inviting students to reason or solve problems around that activity. One influential program of research is known as Evidence Centered Design (ECD). ECD uses Bayesian inference networks to build a sophisticated model of student cognition, together with a set of activities or problems whose responses allow the system to estimate an individual's understanding of the particular domain.
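A deliberately tiny stand-in for this kind of Bayesian machinery is a single binary "mastery" variable updated from observed responses. Real ECD models use full Bayes nets over many proficiency variables; the slip and guess probabilities below are invented purely for illustration:

```python
# Minimal Bayesian mastery estimation: update P(mastery) after each
# observed response using Bayes' rule. Probabilities are assumed
# values for illustration, not calibrated parameters.

P_SLIP = 0.1    # P(wrong | mastery)       -- assumed value
P_GUESS = 0.2   # P(correct | no mastery)  -- assumed value

def update_mastery(prior, correct):
    """One Bayes-rule update of P(mastery) after a single response."""
    p_obs_given_m = (1 - P_SLIP) if correct else P_SLIP
    p_obs_given_not_m = P_GUESS if correct else (1 - P_GUESS)
    numerator = p_obs_given_m * prior
    return numerator / (numerator + p_obs_given_not_m * (1 - prior))

belief = 0.5  # uninformative prior
for correct in [True, True, False, True]:
    belief = update_mastery(belief, correct)

# Mostly-correct responses push the mastery estimate well above the prior.
print(round(belief, 3))
```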
E-assessment is becoming widely used. It has many advantages over traditional (paper-based) assessment. The advantages include:
- lower long-term costs
- instant feedback to students
- greater flexibility with respect to location and timing
- improved reliability (machine marking is much more reliable than human marking)
- improved impartiality (machine marking does not 'know' the students, so it neither favours them nor makes allowances for minor errors)
- greater storage efficiency - tens of thousands of answer scripts can be stored on a server compared to the physical space required for paper scripts
- enhanced question styles which incorporate interactivity and multimedia.
There are also disadvantages. E-assessment systems are expensive to establish and are not suitable for every type of assessment (such as extended-response questions). The main expense is not technical; it is the cost of producing high-quality assessment items, although that cost applies equally to paper-based assessment.
It has also been noted that in regards to university level work, providing electronic feedback can be more time-consuming than traditional assessments, and therefore more expensive.
A project published in the mid-1980s included a detailed review and analysis of the literature on e-assessment systems up to that point. It also examined the advantages and disadvantages of e-assessment using personal computers with elementary school students in grades 2 through 6. The project examined both the validity and the reliability of a personal computer administration of the Peabody Picture Vocabulary Test-Revised (PPVT-R), in contrast to a paper-and-pencil administration. During the computer administration, each student's only interaction with an adult occurred when they were walked from their classroom to the school library, where each was led to a table with the computer. Each student was told that the computer would administer the directions, and that pressing the space bar would repeat the directions or a word. The PPVT-R software program verbally administered the test instructions and presented the visual sample test items. After each student met the standard for administration, the program followed the standard PPVT-R test instructions, verbally presenting each stimulus word while displaying the visual choices. When appropriate, the software offered verbal praise for correct responses, and it informed each student when the test was completed.
The E-assessment project included a detailed cost analysis of the PPVT-R personal computer system hardware configuration as well as development of the PPVT-R personal computer software. The project compared the cost of a personal computer system that administered the PPVT-R, scored the PPVT-R, and wrote the PPVT-R score report against the cost of an individual administering, scoring, and writing the PPVT-R score report. In addition, the project completed an analysis of gender, age, and prior access to personal computers to assess which, if any, of those factors were relevant to students’ performance on either the PPVT-R personal computer administration or the PPVT-R paper and pencil administration. Student preferences and the reason for their preference for either the personal computer administration or the paper and pencil administration of the PPVT-R were recorded. Advantages and disadvantages of the computer hardware configuration and software program were presented.
A project that reviewed the literature on e-assessment from the 1970s until 2010 examined the advantages and disadvantages of e-assessment of students' knowledge of a course curriculum by comparing a paper-and-pencil test with a computer-administered test, evaluating 227 students attending 12 classes of the Apprentice Medical Services Specialist Resident Course. Six classes containing a total of 109 students took the Block One Tests in the traditional paper-and-pencil form; another six classes containing a total of 118 students took the same Block One Tests on computers.
A detailed review of the literature on the advantages and disadvantages of e-assessment for different types of tests, students, and educational environments, from childhood through young adulthood, was completed in 2010.
A newer frontier of e-assessment is brain-computer interface test batteries, which measure various physiological functions of the nervous system while neurocognitive tests are administered (for example, recording electroencephalogram (EEG) signal data while the individual performs a cognitive task, then processing the EEG signal data alongside reaction-time and accuracy data to measure the individual's cognitive performance). Such test batteries compare performance between individuals with and without a neurological disorder, and can also evaluate the effectiveness of interventions in individuals with progressive neurological disorders.
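The behavioural side of such a battery, reaction time and accuracy, reduces to simple event-log processing. A hedged sketch follows; the trial record format is invented for illustration:

```python
# Sketch of deriving reaction time and accuracy from a log of
# stimulus/response events, as a brain-computer interface battery
# might do alongside EEG recording. Field names are illustrative.

trials = [
    {"stimulus_ms": 1000, "response_ms": 1450, "correct": True},
    {"stimulus_ms": 3000, "response_ms": 3620, "correct": False},
    {"stimulus_ms": 5000, "response_ms": 5380, "correct": True},
]

# Reaction time = response timestamp minus stimulus timestamp.
reaction_times = [t["response_ms"] - t["stimulus_ms"] for t in trials]
mean_rt = sum(reaction_times) / len(reaction_times)

# Accuracy = fraction of trials answered correctly.
accuracy = sum(t["correct"] for t in trials) / len(trials)

print(mean_rt)    # mean reaction time in milliseconds
print(accuracy)
```

In a real battery these behavioural measures would be time-aligned with the EEG record, so that signal features can be related to performance on each trial.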
The best examples of e-assessment follow a formative assessment structure, sometimes called "online formative assessment". An initial formative assessment sifts out the questions a pupil answered incorrectly. The author or teacher then explains what the pupil should have done with each of those questions, and the pupil is given at least one practice attempt at each slight variation of the sifted-out questions; this is the formative learning stage. The next stage is a summative assessment, using a new set of questions covering only the topics previously taught. Some systems, such as BOFA's online 11-plus papers aimed at the eleven-plus exam, repeat this cycle.
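The sift-and-reteach cycle above can be sketched as a small function. All names and data structures here are illustrative, not drawn from any particular product:

```python
# Sketch of the online formative assessment cycle: sift out the
# incorrectly answered topics, draw practice items for them, and
# follow up with a summative test over only those topics.

def formative_cycle(responses, practice_bank):
    """responses: {topic: answered_correctly}.
    Returns the topics to re-teach and practice items for each."""
    weak_topics = [t for t, ok in responses.items() if not ok]      # sift
    practice = {t: practice_bank.get(t, []) for t in weak_topics}   # re-teach
    return weak_topics, practice

responses = {"fractions": False, "ratios": True, "algebra": False}
practice_bank = {
    "fractions": ["1/2 + 1/4 = ?", "3/4 of 20 = ?"],
    "algebra": ["Solve x + 3 = 7"],
}

weak, practice = formative_cycle(responses, practice_bank)

# The summative follow-up covers only the topics just practised.
summative_topics = weak
print(weak)
```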
In order to create a mechanism for the sharing of high quality assessment items, global standards have emerged. The IMS Question and Test Interoperability specification (QTI) provides a common format for describing and distributing question items across disparate systems.
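To give a flavour of what such interoperable items look like, the sketch below builds a single multiple-choice item in a simplified, QTI-like XML shape. The element names are loosely modelled on IMS QTI 2.x, but this is an illustration only and makes no claim of spec conformance:

```python
import xml.etree.ElementTree as ET

# Build one multiple-choice item in a simplified QTI-like structure.
# Element and attribute names are illustrative, not spec-conformant.

item = ET.Element("assessmentItem", identifier="q001", title="Capital of France")

# Declare which response is correct.
decl = ET.SubElement(item, "responseDeclaration", identifier="RESPONSE")
ET.SubElement(decl, "correctResponse").text = "choiceA"

# The body holds the interaction the candidate sees.
body = ET.SubElement(item, "itemBody")
interaction = ET.SubElement(body, "choiceInteraction", responseIdentifier="RESPONSE")
ET.SubElement(interaction, "simpleChoice", identifier="choiceA").text = "Paris"
ET.SubElement(interaction, "simpleChoice", identifier="choiceB").text = "Lyon"

xml_text = ET.tostring(item, encoding="unicode")
print(xml_text)
```

Because the item is plain XML with declared identifiers, any engine that understands the format can render the interaction and score the response without knowing anything about the authoring system.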
Hand-held student response systems
An area of e-assessment that has seen extensive growth in recent years is the use of hand-held student response devices (often referred to as clickers or voting devices). These allow a teacher to carry out whole-group assessments, polls and surveys quickly and easily. They use either radio or infrared signals to communicate with a central hub that is usually attached to a computer. In many school classrooms these devices may also be used in combination with an interactive whiteboard.
Note on terminology
Various terms are used to describe the use of a computer for assessment purposes. These include:
- Computer-Assisted Assessment or Computer-Aided Assessment (CAA)
- Computer-Mediated Assessment (CMA)
- Computer-Based Assessment (CBA)
- Online assessment
Although these terms are commonly used interchangeably, they have distinct meanings.
Computer Assisted/Mediated Assessment refers to any application of computers within the assessment process; the role of the computer may be extrinsic or intrinsic. It is, therefore, a synonym for e-assessment which also describes a wide range of computer-related activities. Within this definition the computer often plays no part in the actual assessment of responses but merely facilitates the capture and transfer of responses between candidate and human assessor.
Computer-Based Assessment refers to assessment which is built around the use of a computer; the use of a computer is always intrinsic to this type of assessment. This can relate to assessment of practical IT skills or, more commonly, the on-screen presentation of knowledge tests. The defining factor is that the computer marks or assesses the responses provided by candidates. It can be performed on an equivalent electronic device such as a cell phone or PDA. CBA systems enable educators and trainers to author, schedule, deliver, and report on surveys, quizzes, tests and exams. They may be stand-alone systems or part of a virtual learning environment, possibly accessed via the World Wide Web.
Online assessment refers to assessment activity which requires the use of the internet. In reality, few high-stakes assessment sessions are actually conducted online in real-time, but the transfer of data prior to and after the assessment session is conducted via the internet. There are many examples of practice and diagnostic tests being run real time over the internet.
E-assessment dates to the PLATO system (1960) from the University of Illinois. This was commercialized by Control Data Corporation (then owner of PLATO) in the 1970s, starting with an online testing system for National Association of Securities Dealers (now the Financial Industry Regulatory Authority), a private-sector regulator of the US securities markets. The system was developed by Michael Stein, E. Clarke Porter and PLATO veteran Jim Ghesquiere, in cooperation with NASD executive Frank McAuliffe. The testing business grew slowly and was ultimately spun off from CDC as Drake Training and Technologies (today Thomson Prometric) in 1990.
The system then transitioned off mainframes to a LAN-based client-server architecture (with Novell), and the testing business grew rapidly in the 1990s with the spread of networks, together with IT certifications and testing centers from various companies, including Novell and Microsoft. Further expansion came with Pearson VUE, founded by PLATO/Prometric veterans E. Clarke Porter, Steve Nordberg and Kirk Lundeen in 1994, which was one of the first to use the internet and developed self-service test registration.
The computer-based testing industry has continued to grow, adding professional licensure and educational testing as important business segments.
- Cambridge Neuropsychological Test Automated Battery
- Computer-adaptive test
- Computerized classification test
- CDR Computerized Assessment System
- Grieve, Rachel; Padgett, Christine R.; Moffitt, Robyn L. (2016-01-01). "Assignments 2.0: The role of social presence and computer attitudes in student preferences for online versus offline marking". The Internet and Higher Education 28: 8–16. doi:10.1016/j.iheduc.2015.08.002.
- Lichtenwald, Terrance G. (1987). "An Investigation of the Validity, Reliability, and Acceptance by Children of a Microcomputer Administration of the Peabody Picture Vocabulary Test-Revised (PPVT-R)".
- Millsap, Claudette M. (2000). "Comparison of Computer Testing versus Traditional Paper and Pencil".
- Blazer, Christie (2010). "Information Capsule". Research Services, Volume 0918 (ERIC ED544707).
- Simon, Adam Jay (2014). "Brain-computer interface test battery for the physiological assessment of nervous system health". US Patent Application US 20120150545 A1.
- Asuni, Nicola. "TCExam :: Computer-Based Assessment". Retrieved 2008-07-15.
- Gomersall, Bob (2005-12-10). Practical implementation of e-testing on a large scale, and implications for future e-assessment and e-learning. Shipley, West Yorkshire, UK. Retrieved 2007-10-01.
- Scheuermann, Friedrich; Ângela Guimarães Pereira (2008-04-01). Towards A Research Agenda On Computer-Based Assessment (PDF). Luxembourg, Luxembourg. Retrieved 2008-07-15.
- Scheuermann, Friedrich; Julius Björnsson (2008-04-01). The Transition to Computer-Based Assessment - New Approaches to Skills Assessment and Implications for Large-scale Testing (PDF). Luxembourg, Luxembourg. Retrieved 2009-04-02.
- Scottish Qualifications Authority (2008). "SOLAR White Paper" (PDF). Glasgow, UK. Retrieved 2008-02-15.
- Laumer, S., Stetten, A. & Eckhardt, A. (2009) E-Assessment. Business & Information Systems Engineering, 1 (3), 263–265. doi: 10.1007/s12599-009-0051-6.
- The e-Assessment Association is a professional body for e-Assessment.
- International Annual Conference on e-Assessment.
- Transforming Assessment has a range of examples of e-assessment in web 2.0 and virtual world learning environments (based at the University of Adelaide, Australia)
- e-assessment section at JISC, UK.
- CAA Centre still offers the most to-the-point treatment of Computer-Based testing for use in Higher Education.
- eAssessment at the Open University, UK shows how to use an opensource computer-assisted assessment (CAA) system called OpenMark designed to mark less structured questions.