The first step in creating a certification examination is to define the area of content to be measured. This is done through a job analysis. When developing the C-SAPA examination, invitations were sent to all known practicing SAPAs asking them to list the tasks they perform. These tasks were organized into categories, and the tasks and categories were reviewed by the subcommittee of SAPAA charged with creating the examination (the precursor of SAPACC). These lists were turned into a survey that was administered to a sample of practicing SAPAs, who rated the frequency and importance of each task and category. The results of this survey defined the areas of content and the appropriate weighting (number of items) of each area tested in the SAPACC examination. The SAPACC examination thus comprehensively measures the jobs of SAPAs in the proportions defined by practicing SAPAs in the survey. From the outset, the C-SAPA examination was designed to serve individuals with some knowledge of all modes, as well as individuals practicing in the private sector. This necessitated a broad approach to defining the job of SAPAs and the content of the C-SAPA examination. The job analysis is reviewed every five years to assure that the test continues to measure the actual practice of C-SAPAs.
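One common way to turn frequency and importance ratings into content weights is to multiply the two ratings and allot items in proportion to each category's average product. The sketch below illustrates that idea only; the category names, rating values, and the frequency-times-importance rule are illustrative assumptions, not the actual SAPAA survey data or scoring rule.

```python
# Hypothetical sketch: converting job-analysis survey ratings into
# per-category item counts. All names and numbers are invented for
# illustration; they are not from the SAPAA job analysis.

def content_weights(ratings, total_items):
    """ratings: {category: [(frequency, importance), ...]} on 1-5 scales.
    Returns items allotted to each category, proportional to the mean
    frequency x importance product (one plausible criticality index)."""
    criticality = {
        cat: sum(f * i for f, i in pairs) / len(pairs)
        for cat, pairs in ratings.items()
    }
    total = sum(criticality.values())
    return {cat: round(total_items * c / total)
            for cat, c in criticality.items()}

survey = {  # invented example responses
    "Regulations": [(5, 5), (4, 5), (5, 4)],
    "Collection procedures": [(5, 4), (4, 4)],
    "Recordkeeping": [(3, 3), (2, 4)],
}
weights = content_weights(survey, 150)
```

Categories rated both frequent and important receive proportionally more of the 150 items; rounding can make the counts sum to slightly more or less than the total, which a real blueprint would reconcile by hand.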
Once the job of SAPAs was defined and the relative weights of each area of the job were specified, the content of the test needed to be created. Since the majority of the job of SAPAs is mental work, the most appropriate format for the test was multiple-choice questions. An invitation was sent to all SAPAs registered with SAPAA requesting items. A large number of items was created, including many duplicates. All items were reviewed by SAPACC, and the regulation underlying each question was consulted. The reference for each question is retained in a notes section for that item and can be consulted if there are any questions about the validity of the item. From the resulting set of questions, those referring to specific federal regulations were reviewed by a federal official in that area of the government. This method of reviewing and referencing questions assures that all items used on the SAPACC examination are appropriate for the examination and are correctly keyed. The examination is constantly being updated. All C-SAPAs are invited to submit items for use in the examination, although in practice the members of SAPACC provide the bulk of the new items. Any item submitted for use in the examination is reviewed by a committee of C-SAPAs (SAPACC), who modify the item as necessary, verifying it against and referencing it to the appropriate federal regulation. For an item to be used on the C-SAPA examination, it must be unanimously approved by SAPACC. SAPACC holds item-writing and review meetings annually.
The cut score for the SAPACC examination is set using an empirical method called the Angoff method. In this method, a group of subject matter experts (SMEs) defines minimum competence for SAPAs. Since certification examinations exist to assure that certified individuals are competent to practice, the first step is to define competence. Minimum competence is the lowest level of ability that can be considered competent in a field. Note that this is not a low level of ability, but the level of ability necessary to practice independently. The SMEs rated each question, estimating the percentage of minimally competent candidates who would answer the item correctly. When the SMEs disagreed (ratings more than thirty points apart), they discussed the reasons for their ratings and then re-rated the item. The average across all judges becomes the Angoff, or item difficulty, value of each item. These item values are then averaged across all questions to create the cut score. Candidates are further given the benefit of the doubt by subtracting one Standard Error of Measure (SEM, an estimate of the amount of variability in scores that is attributable to the test itself) to create the final cut score. If the SMEs in the cut score workshop did not feel a particular item was correct or appropriate, the item was removed from the item bank and not used on the test. By reviewing items in this manner, every item on the test has been reviewed by two separate committees of C-SAPAs and verified correct.
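The arithmetic of the Angoff procedure described above can be sketched in a few lines. The SME ratings and the SEM value below are invented for illustration; only the averaging-then-subtract-one-SEM structure comes from the document.

```python
# Minimal sketch of the Angoff cut-score computation described above.
# The ratings matrix and SEM are illustrative, not real SAPACC data.

def angoff_cut_score(ratings, sem):
    """ratings: one row per SME, one column per item; each entry is the
    judged percentage (0-100) of minimally competent candidates who
    would answer that item correctly. Returns (raw_cut, final_cut)."""
    # Angoff value of each item: the mean of the SME ratings for it.
    item_values = [sum(col) / len(col) for col in zip(*ratings)]
    # Raw cut score: the mean Angoff value across all items.
    raw_cut = sum(item_values) / len(item_values)
    # Benefit of the doubt: subtract one SEM for the final cut score.
    return raw_cut, raw_cut - sem

sme_ratings = [
    [70, 80, 60, 90],   # SME 1
    [75, 85, 55, 85],   # SME 2
    [65, 90, 60, 95],   # SME 3
]
raw, final = angoff_cut_score(sme_ratings, sem=3.0)
```

In a real workshop, the re-rating round for divergent judges would happen before this averaging step; the function only shows the final aggregation.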
Administering and Scoring the Examination
The SAPACC examination is administered using strict procedures to assure that cheating is extremely unlikely. Candidates are given ample time to answer all questions (four hours for 150 items) to assure that they do not miss any questions due to time pressure. Tests are securely shipped to and from the site via traceable courier. Before scanning, all answer sheets are examined to assure that no questions are missed due to stray marks. The examinations are scanned twice and the two scans are electronically compared. A random sample of answer sheets is hand-scored, as are the tests at or near the cut score.
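The double-scan comparison amounts to flagging any sheet on which the two scanning passes disagree so it can be hand-scored. A minimal sketch, assuming a simple candidate-to-answers data layout that is not specified in the document:

```python
# Illustrative sketch of the electronic comparison of two scan passes.
# The data layout (candidate ID -> list of answers) is assumed.

def scan_discrepancies(first_pass, second_pass):
    """Each pass: {candidate_id: [answers, ...]}. Returns the IDs of
    sheets whose two scans disagree on any item; those sheets would
    be routed to hand scoring."""
    return sorted(
        cid for cid in first_pass
        if first_pass[cid] != second_pass[cid]
    )

pass1 = {"C001": ["A", "C", "B"], "C002": ["D", "D", "A"]}
pass2 = {"C001": ["A", "C", "B"], "C002": ["D", "B", "A"]}
mismatches = scan_discrepancies(pass1, pass2)
```

Here the two scans of sheet C002 disagree on one answer, so only that sheet needs manual review.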
After the tests are scored, any items with questionable item statistics are reviewed again in light of those statistics. Items flagged by candidate comments are also reviewed. If an item is found to be flawed in this second review, it is removed from scoring.
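The document does not say which item statistics are examined, but two standard choices are item difficulty (the proportion answering correctly) and discrimination (the correlation between an item and the rest of the test). The sketch below screens items on those two statistics under assumed, illustrative thresholds; the response data are invented.

```python
# Hedged sketch of post-administration item screening. Difficulty (p)
# and corrected point-biserial discrimination are standard statistics,
# but the document does not specify SAPACC's actual criteria; the
# thresholds and data here are illustrative assumptions.

def pearson(x, y):
    """Pearson correlation; returns 0.0 if either variable is constant."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def flag_items(responses, p_range=(0.2, 0.95), min_rpb=0.1):
    """responses: one 0/1 answer vector per candidate (1 = correct).
    Returns indices of items whose difficulty falls outside p_range or
    whose discrimination falls below min_rpb, for committee re-review."""
    totals = [sum(row) for row in responses]
    flagged = []
    for j in range(len(responses[0])):
        item = [row[j] for row in responses]
        p = sum(item) / len(item)                     # item difficulty
        rest = [t - i for t, i in zip(totals, item)]  # score on other items
        rpb = pearson(item, rest)                     # discrimination
        if not (p_range[0] <= p <= p_range[1]) or rpb < min_rpb:
            flagged.append(j)
    return flagged

answers = [  # invented data, candidates ordered strongest to weakest
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
]
questionable = flag_items(answers)
```

In this toy data the last item is answered correctly by everyone, so it neither discriminates nor varies in difficulty and would be sent back for review; flagged items are re-examined by people, not removed automatically.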
In scoring the examination, the test statistic used to estimate the amount of variability in one person's score that could be accounted for by the test itself is known as the Standard Error of Measure (SEM). People with identical abilities who took the C-SAPA test would expect their scores to fall within one SEM of their obtained score about 68% of the time. SAPACC credits this variability to the candidates by subtracting one SEM from the Angoff-derived cut score to create the final cut score. Individuals who score above the cut score pass, and those below do not. Those who do not pass are provided feedback on the areas of the test in which they had difficulty. Since the SAPACC examination is designed to cover a broad range of practice, individuals who do not pass a particular administration should not become discouraged; they may simply need to study areas of practice in which they are not usually involved. Because of the strict procedures used in creating the examination, individuals who have passed it have something of value and something to be proud of.
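The SEM itself is conventionally estimated from the spread of observed scores and a reliability coefficient, via the classical formula SEM = SD × √(1 − reliability). The document does not state how SAPACC estimates its SEM, so the sketch below uses that standard formula with invented scores and an assumed reliability value.

```python
# Worked sketch of the Standard Error of Measure. The classical formula
# SEM = SD * sqrt(1 - reliability) is standard in test theory; the score
# spread, reliability, and cut score below are invented for illustration.
import statistics

def standard_error_of_measure(scores, reliability):
    """scores: observed test scores for a candidate group.
    reliability: e.g. a KR-20 or Cronbach's alpha estimate in [0, 1]."""
    sd = statistics.pstdev(scores)          # spread of observed scores
    return sd * (1 - reliability) ** 0.5

observed = [92, 105, 118, 99, 111, 124, 87, 130]   # invented raw scores
sem = standard_error_of_measure(observed, reliability=0.91)
final_cut = 110.0 - sem   # assumed Angoff cut of 110, minus one SEM
```

The higher the reliability, the smaller the SEM, and so the smaller the benefit-of-the-doubt adjustment subtracted from the Angoff cut score.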
Copyright © 2005, PES