
Unlocking Self-Discovery: The Power of Formative Question Types for Meaningful Self-Assessment

July 29, 2025, by Nova Mentis, Llewellyn Craddock

In our journey of personal and professional growth, one of the most powerful tools at our disposal is self-assessment. But how do we accurately gauge where we truly stand today, so we can chart a course toward where we want to be tomorrow? The answer lies in asking the right questions—formative questions that capture not just a snapshot but a dynamic, evolving picture of our skills, confidence, and maturity.

Why Formative Self-Assessment Matters

Unlike summative tests, which offer a final grade or certification, formative self-assessments focus on ongoing growth. They invite honest reflection, encouraging us to recognize small improvements and areas for development. This process of continuous feedback helps us stay aligned with our goals and adapt our strategies in real time.

To make this work, questions must be carefully crafted to elicit meaningful insights without feeling threatening or overwhelming. Below, we explore key types of questions you can use in your self-assessment toolkit to uncover your current reality and move toward your desired future.

Key Question Types for Formative Self-Assessment

  1. Likert Scale Questions: Measuring Attitudes and Confidence
    These ask you to express agreement or disagreement with statements about your feelings or beliefs. For example:
    “I feel confident troubleshooting technical problems.”
    Using a 5- or 7-point scale from “Strongly Disagree” to “Strongly Agree,” these questions help you quantify your confidence or attitude toward specific skills or tasks.
  2. Semantic Differential Questions: Capturing Nuance in Perception
    These use pairs of opposite adjectives to describe how you perceive something. For example:
    “My coding skills are: Easy ←→ Difficult”
    This format lets you express subtle perceptions of your abilities without the bias of a leading statement, revealing where your experience falls between the two poles.
  3. Behavioural Frequency Questions: Tracking Actual Practice
    These questions focus on how often you perform specific actions within a defined timeframe. For instance:
    “In the past 7 days, how many times did you review your project documentation?”
    Providing concrete reference periods improves recall accuracy and helps you see patterns in your behaviour.
  4. Self-Efficacy Questions: Judging Your Capability Under Challenge
    Here, you rate your perceived ability to perform tasks under varying conditions. Example:
    “Rate your ability to write secure SQL queries under time pressure.”
    Using a numeric scale (e.g., 0–10), these questions help you assess not just whether you can do something, but how confident you feel doing it in realistic scenarios.
  5. Checklist with Confidence Ratings: Combining Knowledge and Certainty
    This approach asks you to identify which skills or knowledge you possess and then rate how confident you are in each. For example:
    “Select the Linux commands you can execute without reference AND rate your confidence (0–4) in each.”
    This dual-layered method gives richer data by blending objective recall with subjective certainty.
  6. Maturity Ladder Questions: Assessing Stage-Based Progression
    These ask you to select descriptions that best represent your current level of process or skill maturity. For example:
    “Which statement best reflects your documentation process?” with options ranging from “Ad-hoc and informal” to “Standardized and optimized.”
    This helps you see where you stand along a developmental continuum and what the next stage looks like.
  7. Behaviourally Anchored Rating Scales (BARS): Anchoring Assessments in Observable Actions
    These involve choosing statements that describe concrete behaviours aligned with performance levels. For example:
    “Select the statement that best describes your peer-review practices.”
    Each choice corresponds to a specific, observable behaviour, making self-assessment more objective and actionable. A minimal sketch of how a few of these formats can be captured and scored as data follows this list.
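
If you want to build or automate your own self-assessment, these formats translate naturally into simple data structures. The Python sketch below is purely illustrative, assuming hypothetical names (LikertItem, FrequencyItem, ConfidenceChecklistItem) and example skills; it is not a prescribed implementation, just one way to represent question types 1, 3, and 5.

from dataclasses import dataclass

@dataclass
class LikertItem:
    """Agreement statement rated on a fixed ordinal scale (type 1)."""
    statement: str
    scale_labels: tuple = ("Strongly Disagree", "Disagree", "Neutral",
                           "Agree", "Strongly Agree")

    def score(self, label: str) -> int:
        # Convert the chosen label to a 1..N score so changes can be tracked over time.
        return self.scale_labels.index(label) + 1

@dataclass
class FrequencyItem:
    """Behavioural frequency question with an explicit reference period (type 3)."""
    action: str
    reference_period_days: int

    def prompt(self) -> str:
        return f"In the past {self.reference_period_days} days, how many times did you {self.action}?"

@dataclass
class ConfidenceChecklistItem:
    """Checklist entry paired with a 0-4 confidence rating (type 5)."""
    skill: str
    can_do: bool = False
    confidence: int = 0  # 0 = no confidence, 4 = fully confident

# Example usage with invented items
likert = LikertItem("I feel confident troubleshooting technical problems.")
print(likert.score("Agree"))  # -> 4

freq = FrequencyItem("review your project documentation", reference_period_days=7)
print(freq.prompt())

checklist = [
    ConfidenceChecklistItem("grep", can_do=True, confidence=3),
    ConfidenceChecklistItem("awk", can_do=True, confidence=1),
]
print([item.skill for item in checklist if item.can_do])  # -> ['grep', 'awk']

The point of the dual structure in ConfidenceChecklistItem is exactly the blend described above: an objective yes/no recall field alongside a subjective certainty rating.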

How to Begin Measuring Your Current Reality

  1. Define Your Desired Reality Clearly
    Start by identifying the skills, confidence levels, milestones, or maturity stages you aspire to reach. This clarity will guide the selection of formative questions that are relevant and meaningful.
  2. Select Question Types Aligned with Your Goals
    Use attitude questions like Likert scales to understand your confidence, behavioural frequencies to track actions, and maturity ladders to pinpoint developmental stages.
  3. Be Specific and Contextual
    Frame questions with clear context, action, target, and time (the C-A-T-T principle) to get precise and actionable insights.
  4. Use Quantifiable Scales
    Opt for multi-point scales that allow you to detect subtle changes over time, supporting a nuanced view of your growth trajectory; a brief sketch of comparing scores across assessment rounds follows this list.
  5. Reflect and Act on Your Responses
    Treat your self-assessment as a starting point for growth—identify strengths to build on and areas to develop, then set concrete next steps.
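
Once ratings are recorded at regular intervals, a small amount of arithmetic makes the growth trajectory visible. The snippet below is a hypothetical illustration of steps 4 and 5: it compares two rounds of self-efficacy scores (0-10 scales) and flags where to focus next. The skill names and the threshold are invented for the example.

# Hypothetical self-efficacy scores (0-10) from two assessment rounds.
baseline = {"secure SQL queries": 4, "peer review": 6, "documentation": 3}
latest   = {"secure SQL queries": 6, "peer review": 6, "documentation": 4}

FOCUS_THRESHOLD = 5  # illustrative cut-off for "area to develop"

for skill, before in baseline.items():
    after = latest[skill]
    change = after - before
    status = "area to develop" if after < FOCUS_THRESHOLD else "strength to build on"
    print(f"{skill}: {before} -> {after} ({change:+d}), {status}")

# Example output:
# secure SQL queries: 4 -> 6 (+2), strength to build on
# peer review: 6 -> 6 (+0), strength to build on
# documentation: 3 -> 4 (+1), area to develop

Even this crude comparison turns a one-off rating into the kind of ongoing, formative feedback loop described earlier: it shows not just where you stand, but where you are moving.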

Final Thoughts

Formative self-assessment is not about judgment; it's about discovery and progress. By thoughtfully incorporating diverse question types into your self-evaluation, you create a powerful mirror reflecting your current reality and illuminating the path forward. Next time you want to understand where you stand and how to grow, remember: the questions you ask yourself hold the key.


References

  1. Carnegie Mellon University. (n.d.). Formative and summative assessments. https://www.cmu.edu/teaching/assessment/basics/formative-summative.html
  2. Poorvu Center for Teaching and Learning, Yale University. (n.d.). Formative & summative assessments. https://poorvucenter.yale.edu/Formative-Summative-Assessments
  3. Qualtrics. (n.d.). Measuring behavioral frequency. https://www.qualtrics.com/blog/measuring-behavioral-frequency/
  4. Blair, J., & Burton, R. (1986). Measuring behavioral frequency: Questionnaire design considerations. http://www.asasrms.org/Proceedings/papers/1986_090.pdf
  5. Motamem. (2020). Self-efficacy scales and assessment. https://www.motamem.org/wp-content/uploads/2020/01/self-efficacy.pdf
  6. SurveyMonkey. (n.d.). Likert scale basics. https://www.surveymonkey.com/mp/likert-scale/
  7. Imperial College London. (n.d.). Best practice in questionnaire design. https://www.imperial.ac.uk/research-and-innovation/education-research/evaluation/tools-and-resources-for-evaluation/questionnaires/best-practice-in-questionnaire-design/
  8. Heinrich, R. J. (n.d.). Construct validity in survey research. https://www.linkedin.com/pulse/construct-validity-survey-research-richard-james-heinrich-ph-d-
  9. Oliver, P. (2015). Measurement validity. https://users.ssc.wisc.edu/~oliver/wp/wp-content/uploads/2015/07/Measurement-ValidityBW.pdf
  10. Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach’s alpha. International Journal of Medical Education, 2, 53-55. https://pmc.ncbi.nlm.nih.gov/articles/PMC10810057/
  11. Blasberg, A., et al. (2016). Effects of item wording on reliability. https://cruxpsychology.ca/wp-content/uploads/2017/07/Blasbergetal.2016.702-717.pdf
  12. PubMed. (2023). Cognitive load from negations in survey items. https://pubmed.ncbi.nlm.nih.gov/39541592/
  13. American Association for Public Opinion Research. (2014). Recall accuracy in frequency questions. https://academic.oup.com/poq/article/57/4/552/1849596
  14. Scribd. (n.d.). Likert scale best practices. https://www.scribd.com/document/182075652/Likert-Scale-Best-Practices-docx
  15. Conjointly. (n.d.). How to develop Likert questions. https://conjointly.com/blog/how-to-develop-likert-questions/
  16. Clemson University. (n.d.). Likert-type scale response anchors. https://media.clemson.edu/cbshs/prtm/research/resources-for-research-page-2/Vagias-Likert-Type-Scale-Response-Anchors.pdf
  17. Simply Psychology. (n.d.). Semantic differential scale. https://www.simplypsychology.org/semantic-differential.html
  18. Paperform. (n.d.). Semantic differential scale explained. https://paperform.co/blog/semantic-differential-scale/
  19. Alchemer. (n.d.). Measuring attitudes with semantic differential questions. https://www.alchemer.com/resources/blog/how-to-measure-attitudes-with-semantic-differential-questions/
  20. Ottawa Hospital Research Institute. (n.d.). Decision self-efficacy manual. https://decisionaid.ohri.ca/docs/develop/user_manuals/UM_decision_selfefficacy.pdf
  21. AIHR. (n.d.). Behaviorally anchored rating scale guide. https://www.aihr.com/blog/behaviorally-anchored-rating-scale/
  22. Indeed Career Guide. (n.d.). Behaviorally anchored rating scales explained. https://www.indeed.com/career-advice/career-development/behaviorally-anchored-rating-scales
  23. Engagedly. (n.d.). Behaviorally anchored rating scale: A complete guide. https://engagedly.com/blog/behaviourally-anchored-rating-scale-a-complete-guide/
  24. PerformYard. (n.d.). What are behaviorally anchored rating scales? https://www.performyard.com/articles/what-are-behaviorally-anchored-rating-scales-bars
  25. Linford & Co. (n.d.). Security maturity models. https://linfordco.com/blog/security-maturity-models/
  26. Pearson Higher Education. (n.d.). Capabilities maturity models. https://www.pearsonhighered.com/assets/samplechapter/0/2/0/1/0201604450.pdf
  27. The Institute of Internal Auditors. (n.d.). Selecting and using maturity models. https://www.theiia.org/globalassets/documents/content/articles/guidance/practice-guides/selecting-using-and-creating-maturity-models/pg-maturity-models.pdf
  28. Deloitte. (n.d.). Considerations for maturity model selection. https://www2.deloitte.com/content/dam/Deloitte/us/Documents/public-sector/us-public-sector-considerations-for-maturity-model-selection-v3.pdf
  29. SmartSurvey. (n.d.). Technical skills training assessment template. https://www.smartsurvey.co.uk/templates/surveys/training/technical-skills-training-assessment-survey-template
  30. Frontiers in Psychology. (2025). Ajzen’s compatibility principle and question specificity. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1446798/pdf
  31. Statistics Solutions. (n.d.). Anchors of a scale. https://www.statisticssolutions.com/apa-fact-of-the-week-anchors-of-a-scale/
  32. TASO. (n.d.). Designing Likert scales. https://taso.org.uk/evidence/evaluation-guidance-resources/survey-design-resources/evaluation-guidance-designing-likert-scales/
  33. Edyoucated. (n.d.). Skill proficiency scale glossary. https://edyoucated.org/en-us/glossary/skill-proficiency-scale
  34. SuperSurvey. (n.d.). Skill assessment survey. https://www.supersurvey.com/LPF-skill-assessment-survey
  35. SurveyKing. (n.d.). Semantic differential scale. https://www.surveyking.com/help/semantic-differential-scale
  36. TutorialsPoint. (n.d.). CMMI maturity levels. https://www.tutorialspoint.com/cmmi/cmmi-maturity-levels.htm
  37. Frontiers in Education. (2023). Validity evidence in formative assessments. https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2023.1306532/full
  38. FasterCapital. (n.d.). Semantic differential vs. Likert scale analysis. https://fastercapital.com/content/Semantic-Differential–The-Difference-in-Details–Semantic-Differential-vs–Likert-Scale-Analysis.html
  39. LimeSurvey. (n.d.). Self-assessment surveys. https://www.limesurvey.org/surveys/online-surveys/self-assessment-surveys
