Saturday, January 27, 2024

Things to consider before administering athletic testing, part one.

    Athletic testing serves two purposes: 1. to see whether the athlete (or whoever is being tested) has the athletic talent to play the sport, and 2. to see where the athlete can improve. The Army uses similar techniques with our unique fitness test. You hear it from troops all the time: "I've got to work on my Standing Power Throw, the Plank, the two-mile run," and so on. Training is then tweaked in hopes of improving that particular area.

    When testing, two conditions have to apply. First, the test has to be valid (it measures what it is supposed to measure), and second, it has to be reliable (the test is repeatable). Imagine a test that is supposed to assess overall fitness but only covers muscular endurance and aerobic fitness, or a test that cannot be repeated due to equipment shortages or environmental conditions.

    Looking at the validity of a test encompasses a few factors. Construct validity is a fancy way of saying the overall validity of a particular test. Let's say you're looking to measure a person's max speed. Would you have them run a mile for the test? Doubtful. A 50-meter sprint is more applicable, making it suitable on construct validity grounds. Face validity is the appearance, to a layman, that a test measures what it is supposed to measure. At first glance, a 400 m sprint looks like it measures max speed from an outside perspective; a deeper look shows a 400 m sprint taxes anaerobic capacity more than true max speed. Content validity, on the other hand, is the appearance of a test from an expert's perspective. An expert knows the metabolic demand of a 400 m sprint is very different from that of a 50-meter sprint when the goal is measuring max speed.

    Criterion-referenced validity is the statistical result of a test and how well it measures the same ability as an established measure. A 50-meter sprint meets the criterion-referenced standard for max speed. Criterion-referenced validity has three subcomponents: concurrent, convergent, and predictive. Concurrent validity refers to how a test's scores are associated with those of other tests; a 50 m sprint and a 40-yard dash produce similar results (max speed). Convergent validity is how a particular test measures up to the "gold standard" test. For example, when measuring aerobic fitness, the gold standard is being hooked up to equipment in a lab to get a true VO2 max, while an acceptable field test could be a 12-minute run. Testers may opt for the 12-minute run because they lack proper equipment or because the equipment needed for a true VO2 max is too expensive. Having experts in a lab for a true VO2 max is a luxury; having a coach with a stopwatch to time a 12-minute run is far more accessible. Lastly, predictive validity is how well a test correlates to future performance within that particular sport. The NFL Combine is the best example: football players from around the country are tested during the Combine, and team decision makers make predictions on how well each player will perform in the NFL.
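Concurrent validity is usually checked with a simple correlation between the two tests' scores. As a minimal sketch, here is a Pearson correlation between hypothetical 50-meter sprint and 40-yard dash times (all numbers below are made up for illustration):

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical times (seconds) for five athletes on two max-speed tests
sprint_50m = [6.1, 6.4, 5.9, 6.8, 6.2]
dash_40yd = [4.5, 4.7, 4.4, 5.0, 4.6]

print(round(pearson_r(sprint_50m, dash_40yd), 2))
```

A correlation near 1.0 would support the claim that both tests are tapping the same ability (max speed); a weak correlation would suggest they measure different things.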

    Reliability is how consistent or repeatable a test is. Reliability also has subcomponents. Test-retest reliability is the correlation of scores across two separate administrations of the same test. Typical error of measurement is simply the variation in scores, influenced by the athletes themselves (gender, age, and training history all play a factor) and by equipment (think what the University of Alabama football program has compared to South Alabama, equipment-wise). Intrasubject variability is a lack of consistency by athletes from test to test. We see this in the military when a troop crushes a test one day, then six months later they bolo (fail). Interrater reliability speaks to who is administering the test. Factors include using the same standard for testing (a stopwatch for one test, then electronic timing for another), trained versus untrained administrators (experienced versus inexperienced), and how motivated an administrator is; some are enthusiastic about administering a test, while others show zero enthusiasm. Interrater reliability also comes down to the individual graders. Is one grader more lenient? Do they miss repetitions because they are distracted? A final factor is the test itself: was it simply a bad test?
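One common way to put a number on typical error is to take each athlete's test-retest difference and divide the standard deviation of those differences by the square root of two. A minimal sketch with hypothetical two-mile run times (all data invented for illustration):

```python
import statistics

def typical_error(trial1, trial2):
    """Typical error of measurement from a test-retest pair:
    standard deviation of the difference scores divided by sqrt(2)."""
    diffs = [a - b for a, b in zip(trial1, trial2)]
    return statistics.stdev(diffs) / 2 ** 0.5

# Hypothetical two-mile run times (seconds) for the same five troops, months apart
test = [900, 960, 870, 1020, 930]
retest = [910, 950, 885, 1005, 940]

print(round(typical_error(test, retest), 1))
```

The smaller the typical error, the more confident a tester can be that a change in an athlete's score reflects a real change in fitness rather than day-to-day noise.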

   
