What is validity and reliability in research?

Assessing the validity and reliability of research is essential to ensure that the data collection instruments, and the information they gather, are consistent and accurate, so that sound insights can be drawn from the analysis of a study's variables.

Therefore, in this article we will explain what these concepts consist of and which processes are most commonly used to evaluate them.

What is validity and reliability in research?

Research validity and reliability are concepts used to assess the quality of a study. They are primarily used in quantitative research to indicate the extent to which a method, technique, or test measures what it is intended to measure, and how consistently it does so.

Validity is defined as the extent to which an instrument accurately measures the concept it is intended to measure, for example in a quantitative study.

Reliability refers to the extent to which a research instrument consistently achieves the same results if it is used in the same situation repeatedly. 

Taking into account the validity and reliability of data collection tools is important when conducting or critiquing a study, since the level of certainty that can be drawn from its results and conclusions depends on them.

Types of validity in research

To corroborate the validity of a study, three main criteria can be considered: content validity, construct validity and criterion validity. Below we describe what each one consists of:

Content validity

The ideal here is to cover all of the content related to the variable. Content validity asks whether the chosen instrument covers the entire domain of the variable, or construct, that it was designed to measure.

Construct validity

Construct validity refers to whether inferences can be made from test scores about the concept being studied. The tests carried out to demonstrate construct validity are:

  1. Homogeneity: The instrument measures a single construct.
  2. Convergence: The instrument measures concepts similar to those measured by other instruments. If no similar instruments are available, this cannot be assessed.
  3. Evidence from theory: The instrument's results behave in line with the theoretical propositions of the construct it measures.

Criterion validity

Criterion validity is established by comparing an instrument with another instrument that measures the same variable. Correlations can be calculated to determine the extent to which the different instruments measure the same variable. Criterion validity is measured in three ways:

  1. Convergent validity: Shows that an instrument correlates highly with instruments that measure similar variables (illustrated in the sketch after this list).
  2. Divergent validity: Shows that an instrument correlates weakly with instruments that measure different variables. For example, there should be a low correlation between an instrument that measures motivation and one that measures self-efficacy.
  3. Predictive validity: The instrument should correlate highly with criteria measured in the future.
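
To illustrate, convergent and divergent validity checks often come down to correlating scores from different instruments. The sketch below is a minimal example using hypothetical score data and the Pearson correlation from SciPy; the instruments and values are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores for 10 participants on three instruments.
motivation = np.array([12, 15, 14, 10, 18, 16, 11, 13, 17, 15])
motivation_alt = np.array([13, 16, 14, 11, 19, 15, 10, 14, 18, 16])  # similar construct
self_efficacy = np.array([22, 9, 30, 14, 25, 11, 28, 17, 12, 20])    # different construct

# Convergent validity: expect a strong correlation between similar instruments.
r_convergent, _ = pearsonr(motivation, motivation_alt)

# Divergent validity: expect a weak correlation between dissimilar instruments.
r_divergent, _ = pearsonr(motivation, self_efficacy)

print(f"Convergent r = {r_convergent:.2f}")  # a high value supports convergent validity
print(f"Divergent r = {r_divergent:.2f}")    # a low value supports divergent validity
```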

Three attributes of research reliability

Now we will present the three attributes that help corroborate the reliability of a study:

1. Homogeneity or internal consistency

Homogeneity, or internal consistency, is assessed using item-total correlation, split-half reliability, the Kuder-Richardson coefficient, and Cronbach’s alpha (α) coefficient.
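
Of these, item-total correlation is not covered in its own subsection below, so here is a minimal sketch of the corrected item-total correlation, in which each item is correlated with the total of the remaining items. The response matrix is hypothetical and NumPy is assumed.

```python
import numpy as np

# Hypothetical responses: 6 participants x 4 items (e.g., on a 1-5 Likert scale).
items = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])

# Corrected item-total correlation: correlate each item with the sum of the other items.
for i in range(items.shape[1]):
    rest_total = items.sum(axis=1) - items[:, i]
    r = np.corrcoef(items[:, i], rest_total)[0, 1]
    print(f"Item {i + 1}: corrected item-total r = {r:.2f}")
```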

Split-half reliability

Here the results of a test or instrument are divided in half, and the correlation between the two halves is calculated. A strong correlation indicates high reliability, while a weak correlation indicates that the instrument may not be reliable.
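
A minimal sketch of this calculation, assuming hypothetical responses and an odd/even split of the items; the Spearman-Brown adjustment at the end (an addition not mentioned above) is commonly used to estimate the reliability of the full-length test:

```python
import numpy as np

# Hypothetical responses: 8 participants x 6 items.
items = np.array([
    [3, 4, 3, 4, 3, 4],
    [1, 2, 1, 2, 2, 1],
    [5, 5, 4, 5, 4, 5],
    [2, 3, 2, 2, 3, 2],
    [4, 4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 3, 2, 3, 3],
])

# Split the instrument into two halves (here: odd-numbered vs. even-numbered items).
half_a = items[:, 0::2].sum(axis=1)
half_b = items[:, 1::2].sum(axis=1)

# Correlate the two half-scores.
r_halves = np.corrcoef(half_a, half_b)[0, 1]

# Spearman-Brown adjustment estimates the reliability of the full-length test.
split_half_reliability = 2 * r_halves / (1 + r_halves)

print(f"Half-score correlation: {r_halves:.2f}")
print(f"Split-half reliability (Spearman-Brown): {split_half_reliability:.2f}")
```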

Kuder-Richardson coefficient 

The Kuder-Richardson test determines the mean of all possible split-half combinations and produces a coefficient between 0 and 1.

This test is more accurate than the split-half test, but it can only be used with items that have two possible answers (for example, yes or no, 0 or 1).
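
The statistic usually applied here is the KR-20 formula. The sketch below computes it from a hypothetical matrix of dichotomous (0/1) responses:

```python
import numpy as np

# Hypothetical dichotomous responses (1 = correct, 0 = incorrect): 10 participants x 5 items.
items = np.array([
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 0, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 1, 1, 1, 1],
])

k = items.shape[1]                         # number of items
p = items.mean(axis=0)                     # proportion answering each item correctly
q = 1 - p                                  # proportion answering each item incorrectly
total_var = items.sum(axis=1).var(ddof=1)  # variance of participants' total scores

# KR-20 coefficient.
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
print(f"KR-20 = {kr20:.2f}")
```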

Cronbach’s alpha coefficient

Cronbach’s α is the most widely used test to determine the internal consistency of an instrument. In this test, the average of the correlations across all possible split-half combinations is determined.

Instruments with questions that have more than two answers can be assessed with this test. The result of Cronbach’s α is a number between 0 and 1. An acceptable reliability score is one that is equal to or greater than 0.7. 
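
A minimal sketch of the calculation, using the standard formula α = (k / (k − 1)) · (1 − Σ item variances / total score variance) and a hypothetical response matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert-style responses: 6 participants x 4 items.
items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
    [2, 1, 2, 2],
])

alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.2f}")  # a value >= 0.7 is commonly treated as acceptable
```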

2. Stability

Stability is assessed through test-retest and parallel- or alternate-forms reliability tests. Test-retest reliability is measured by administering an instrument to the same participants more than once under similar circumstances.

A statistical comparison is then made between participants’ scores from each administration, which indicates how reliable the instrument is.
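
As a sketch, the comparison is often a simple correlation between the two administrations (an intraclass correlation is another common choice); the scores below are hypothetical:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores for the same 8 participants at two points in time.
time_1 = np.array([24, 18, 30, 22, 27, 15, 20, 26])
time_2 = np.array([25, 17, 29, 23, 26, 16, 21, 27])

# Test-retest reliability: correlation between the two administrations.
r_test_retest, _ = pearsonr(time_1, time_2)
print(f"Test-retest r = {r_test_retest:.2f}")  # a high value indicates a stable instrument
```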

3. Equivalence

Equivalence is assessed through inter-rater reliability. This involves a process for determining the level of agreement between the judgments of two or more observers.
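
As an illustration, agreement between two raters on categorical judgments is commonly quantified with Cohen’s kappa. The sketch below uses hypothetical ratings and scikit-learn’s cohen_kappa_score; the choice of kappa, rather than simple percent agreement, is an assumption.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical categorical ratings of the same 10 observations by two observers.
rater_1 = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_2 = ["yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes", "yes"]

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate strong agreement
```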
