Using Responses to Likert-Type Items in Qualitative Research


Using Likert-type items in qualitative research is both common and a topic of debate among researchers. American social psychologist Rensis Likert (1932) introduced the Likert scale, now a popular tool for measuring attitudes and opinions, in “A Technique for the Measurement of Attitudes,” seeking to create a more reliable and valid method for measuring attitudes, preferences, and perceptions. He proposed five response alternatives: strongly approve, approve, undecided, disapprove, and strongly disapprove. Unlike previous methods, the Likert scale allowed individual Likert-type item scores to be summed into a composite score representing an overall attitude. This summative approach was a significant innovation, providing a more comprehensive measure of attitudes. Likert sought to quantify inherently qualitative, subjective, unobservable constructs, responses, or attitudes (Tanujaya et al., 2022).

In this paper, Likert-type items will refer to the individual items in an instrument rated using an ordered set of alternative responses. Similarly, Likert-type responses will refer to the actual choice made by the research participant, e.g., strongly agree, undecided, or disagree. Likert scale will refer to a group of four or more Likert-type items that a researcher may use to measure a latent variable of interest. Likert-type items provide a structured way to collect data, offering a clear framework for respondents to express their attitudes, opinions, and perceptions, facilitating a more straightforward analysis and comparison of responses (Doğan & Demirbolat, 2021; Dwivedi & Pandey, 2021). Researchers debate whether Likert-type items and Likert scales can be used in only one way, with some maintaining that the analysis is strictly quantitative and others arguing that qualitative analysis is also possible (Tanujaya et al., 2022). Proudfoot (2022) discussed integrating qualitative and quantitative methods through hybrid thematic analysis, highlighting the flexibility in using Likert-type items in qualitative research. In this paper, we present some of the main arguments for and against using them qualitatively and quantitatively, with examples of each.

Likert Scale

The Likert scale is a series or battery of four or more mutually inclusive Likert-type items that are combined into a single composite score variable during data analysis (Tanujaya et al., 2022), implying an assumption of an underlying continuous variable (Doğan & Demirbolat, 2021; Kleinheksel & Summy, 2020). Likert scales refer to the sum or average of responses to multiple Likert-type items designed to measure a single construct, e.g., a scale measuring job satisfaction. The Likert scale revolutionized the measurement of attitudes by providing a straightforward and effective method for quantifying subjective opinions (Tanujaya et al., 2022). Its adoption and continued use across various disciplines underscore its impact on social science research despite ongoing debates about its limitations and best practices (Tanujaya et al., 2022).
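To make the summative logic concrete, the minimal Python sketch below combines one respondent’s answers to a hypothetical five-item job-satisfaction scale into a single composite score. The item wording and response values are illustrative assumptions, not data from any study.

# Minimal sketch: combining Likert-type item responses into a composite Likert scale score.
# The five-item "job satisfaction" scale and the response values are hypothetical.
responses = {
    "My work gives me a sense of accomplishment": 4,   # 1 = strongly disagree ... 5 = strongly agree
    "I am satisfied with my workload": 3,
    "I feel valued by my organization": 5,
    "I would recommend my workplace to others": 4,
    "I am satisfied with my opportunities for growth": 2,
}

scale_sum = sum(responses.values())        # summated score, possible range 5-25
scale_mean = scale_sum / len(responses)    # mean score, possible range 1-5
print(f"Composite (sum): {scale_sum}, composite (mean): {scale_mean}")   # 18 and 3.6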

Over time, researchers developed variations of the original Likert scale, including different numbers of response categories, e.g., 5-point and 7-point scales, and alternative response formats, e.g., frequency scales (Tanujaya et al., 2022). Likert scales provide insight into dimensions of an attitude or perception related to a phenomenon (Tanujaya et al., 2022). The argument for considering Likert-type responses as qualitative or quantitative hinges on the research goals and the nature of the data analysis. Quantitative arguments emphasize numerical analysis, aggregation, and statistical testing, while qualitative arguments focus on the subjective meaning, context, and thematic interpretation of individual responses.

Some researchers advocate for a mixed-methods approach, combining quantitative analysis for general trends with qualitative interpretation for deeper understanding. The context in which the Likert-type scale is used can determine whether a qualitative or quantitative approach is more appropriate. For example, in exploratory research, the qualitative aspect may be emphasized, whereas in hypothesis-testing research, the quantitative aspect may be prioritized. Combining both perspectives can provide a comprehensive understanding of the data. Researchers may also analyze the collected responses to the Likert-type items separately, as items rather than as scales or dimensions (Tanujaya et al., 2022).

Likert-Type Items

Researchers analyze Likert scales and Likert-type items differently due to their distinct characteristics (Tanujaya et al., 2022). Likert-type items are single questions or statements that are mutually exclusive of one another. In analyzing Likert-type items, the researcher does not create a composite score (Doğan & Demirbolat, 2021). Responses to Likert-type items are ordinal, meaning they represent an order, but the intervals between categories are not necessarily equal (South et al., 2022).
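Because the responses are ordinal, a single Likert-type item is typically summarized with frequency counts and a median rather than a mean, which would assume equal intervals between categories. The short Python sketch below illustrates this with hypothetical responses to one item.

# Minimal sketch: summarizing a single Likert-type item as ordinal data.
# The ten responses below are hypothetical.
from collections import Counter
import statistics

levels = ["strongly disagree", "disagree", "undecided", "agree", "strongly agree"]
item_responses = ["agree", "agree", "undecided", "strongly agree", "disagree",
                  "agree", "strongly agree", "undecided", "agree", "disagree"]

# Frequencies preserve the ordering of categories without assuming equal intervals.
counts = Counter(item_responses)
for level in levels:
    print(f"{level}: {counts.get(level, 0)}")

# The median (middle-ranked response) is an appropriate measure of central
# tendency for ordinal data.
ranks = sorted(levels.index(r) for r in item_responses)
print("Median response:", levels[int(statistics.median(ranks))])   # "agree" for these data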

Likert-type responses may be qualitative because the responses and their interpretation are inherently subjective. Each response represents an individual’s subjective experience, belief, or feeling, which can provide qualitative insights. Researchers can describe responses in terms of what they mean for the respondents, focusing on understanding the underlying meanings, contexts, and nuances of respondents’ choices rather than their numerical values (Tanujaya et al., 2022). As qualitative data, each individual’s answer is examined in detail to understand what it signifies for that particular respondent. Likert-type items are also appropriate for probing responses gathered from other data sources.

Analyzing the reasons behind individual Likert-type item responses can provide deeper insights into, and contexts for, the attitudes and perceptions of respondents. A researcher might administer Likert-type items along with open-ended questions in surveys or interviews. After collecting the additional qualitative data, e.g., interview transcripts, the researcher would review the Likert-type responses and associated qualitative data to get an overall sense of the patterns and themes and to delve deeper into the reasons behind each respondent’s Likert-type response (Li et al., 2023). This can reveal the motivations, attitudes, and feelings that led to the specific choice on the Likert-type item, allowing responses to be analyzed for emerging patterns or themes without relying on numerical aggregation, similar to qualitative content analysis (Alabi & Jelili, 2023). Then, the researcher would code the individual Likert-type responses for key themes and patterns, potentially using qualitative data analysis software, e.g., NVivo, Atlas.ti, or MaxQDA, to identify recurring themes or patterns.

Themes can emerge inductively from the data, providing insights into common factors influencing responses. Descriptions are created to capture the essence of each theme, showing how different respondents interpret and react to the same item, and are supplemented by direct quotes from respondents that exemplify specific themes and vividly illustrate their perspectives. Following the coding, the researcher could interpret each response and the themes in the context of the respondents’ overall narratives and backgrounds, considering factors such as personal experiences, cultural background, and situational influences, and discuss how individual responses relate to broader trends and insights from the qualitative data.
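As an illustration of this coding step, the Python sketch below pairs hypothetical Likert-type responses with equally hypothetical open-ended explanations and tags each record with themes from a starter codebook. In practice, the codes would be assigned by the researcher, often in software such as NVivo, Atlas.ti, or MaxQDA; the keyword matching here only stands in for that judgment.

# Minimal sketch: pairing Likert-type responses with open-ended explanations
# and tagging them with themes. All records and the keyword-to-theme codebook
# are hypothetical; keyword matching is a stand-in for researcher-assigned codes.
records = [
    {"id": "P01", "item_response": "disagree",
     "explanation": "My workload leaves no time for family in the evenings."},
    {"id": "P02", "item_response": "agree",
     "explanation": "My supervisor recognizes my effort, which keeps me motivated."},
    {"id": "P03", "item_response": "undecided",
     "explanation": "Pay is fair, but the workload keeps growing every term."},
]

codebook = {
    "work-life balance": ["workload", "family", "evenings", "hours"],
    "recognition": ["recognizes", "valued", "appreciated"],
    "compensation": ["pay", "salary"],
}

for record in records:
    text = record["explanation"].lower()
    themes = [theme for theme, keywords in codebook.items()
              if any(keyword in text for keyword in keywords)]
    print(record["id"], record["item_response"], "->", themes or ["(uncoded)"])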

Likert-type responses can be triangulated with other qualitative data sources such as interview transcripts, observational notes, or document analysis. This integration can provide a more holistic view of the research topic, enriching the analysis and interpretation of Likert-type responses. By focusing on narrative description, thematic analysis, comparative analysis, triangulation, and reflexive practices, researchers can gain rich, detailed insights into participants’ perspectives and experiences and reveal the deeper meanings and contexts behind each response.

Likert-type items facilitate the comparison of responses across different groups or over time (Doğan & Demirbolat, 2021). This comparability can be especially useful in mixed-methods research where qualitative insights need to be aligned with quantitative findings. The result would be a report of findings with rich descriptions, illustrative quotes, and contextual analysis, along with a discussion of the implications of the findings for the research question and the broader field of study. Responses to individual Likert-type items can be useful in comparing responses across groups, e.g., demographic categories or experience levels, to explore variations in perceptions and attitudes. Understanding why different groups respond differently can highlight important contextual factors and social influences.
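A brief Python sketch of such a comparison appears below; the two groups and their responses are hypothetical and serve only to show how side-by-side distributions can prompt the qualitative question of why the groups differ.

# Minimal sketch: comparing Likert-type responses across groups.
# The group labels and responses are hypothetical.
from collections import Counter, defaultdict

responses_by_group = [
    ("early-career", "agree"), ("early-career", "undecided"),
    ("early-career", "disagree"), ("early-career", "agree"),
    ("late-career", "strongly agree"), ("late-career", "agree"),
    ("late-career", "agree"), ("late-career", "undecided"),
]

# Tally each group's response distribution for side-by-side comparison.
distributions = defaultdict(Counter)
for group, response in responses_by_group:
    distributions[group][response] += 1

for group, counts in distributions.items():
    print(group, dict(counts))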

Summary

In conclusion, the use of Likert-type items in qualitative research offers a nuanced approach to exploring complex attitudes and perceptions, bridging the gap between quantitative precision and qualitative depth. While the traditional application of Likert scales has been predominantly quantitative, the integration of qualitative analysis allows for a richer understanding of the subjective meanings behind individual responses. By adopting a qualitative approach, researchers can leverage the structured nature of Likert-type items to capture general trends while simultaneously delving into the contextual and thematic aspects of respondent choices. This dual approach not only enhances the interpretative richness of the data but also provides a more comprehensive framework for understanding the diverse factors influencing attitudes and perceptions. Ultimately, the flexibility of Likert-type items in qualitative research underscores their value as a versatile tool, capable of yielding insights that are deeply reflective of the human experience.


Dr. John Bryan is a university professor, editor, and dissertation chair. Bryan holds a BA in Chemistry from the University of California, San Diego, an MBA in Operations and Marketing from Rutgers, the State University of New Jersey, and a DBA in Leadership from the University of Phoenix.

Dr. Donna Graham is a university professor and dissertation chair. Graham holds a BA in Psychology and Education from Rosemont College, an MS in Counseling from Villanova University, an MEd in Educational Technology from Rosemont College, and a PhD from Capella University.

References

Alabi, A. T., & Jelili, M. O. (2023). Clarifying Likert scale misconceptions for improved application in urban studies. Quality & Quantity, 57(2), 1337–1350.

Doğan, E., & Demirbolat, A. O. (2021). Data-driven decision-making in schools scale: A study of validity and reliability. International Journal of Curriculum and Instruction, 13(1), 507–523.

Kleinheksel, A. J., & Summy, S. E. (2020). Establishing the psychometric properties of the EBPAS-36: A revised measure of evidence-based practice attitudes. Research on Social Work Practice, 30(5), 539–548.

Li, X., Li, Q., & Wang, Q. (2023). Analysis of college students’ misconceptions of quantitative research in social sciences in China: Implications for teaching. Journal of Education and Educational Research, 2(3), 28–31. https://doi.org/10.54097/jeer.v2i3.7140 

Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 22(140), 55.

Proudfoot, J. (2022). Inductive and deductive hybrid thematic analysis in mixed-methods research. Journal of Mixed Methods Research, 17(3), 308–326.

South, L., Saffo, D., Vitek, O., Dunne, C., & Borkin, M. A. (2022). Effective use of Likert scales in visualization evaluations: A systematic review. Computer Graphics Forum, 41(3), 43–55.

Tanujaya, B., Prahmana, R. C. I., & Mumu, J. (2022). Likert scale in social sciences research: Problems and difficulties. FWU Journal of Social Sciences, 16(4), 89–101.
