
Assertions: A New Unit for Accessibility Evaluation

Introduction
In Atomic Tests vs. Holistic Tests: A New Testing Approach, we discussed balancing Atomic and Holistic tests. Now we need to address how to “assert” and “document” those results. This is where Assertions come in. The scoring and conformance model covered in WCAG 3.0 Conformance Model: Moving Beyond A/AA/AAA also connects with Assertions, because Assertions provide a way to supplement areas that quantitative tests cannot cover with documented organizational processes and evidence. ...

Published date: 2026-02-01 · Reading time: 2 min · Word count: 792 words · Author: Isaac
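
The excerpt above describes Assertions as documented organizational processes and evidence rather than machine-checkable results. The WCAG 3.0 Editor’s Draft does not prescribe any storage format for them, so the following is only a minimal sketch of how a team might record one; every field name here is my own assumption for illustration, not part of the draft.

```typescript
// Illustrative sketch only: WCAG 3.0 does not define this data shape.
// The fields are assumptions about what a documented assertion might capture.
interface AccessibilityAssertion {
  procedure: string;    // the process the organization claims to have followed
  scope: string;        // which views or processes the claim covers
  evidenceUrl: string;  // link to the report, notes, or recording backing the claim
  performedBy: string;  // who carried out the procedure
  performedOn: string;  // ISO date the procedure was completed
}

const usabilityTestingAssertion: AccessibilityAssertion = {
  procedure: "Usability testing with participants who rely on screen readers",
  scope: "Checkout flow (cart, payment, and confirmation views)",
  evidenceUrl: "https://example.com/reports/2026-01-checkout-usability", // placeholder URL
  performedBy: "Accessibility team",
  performedOn: "2026-01-20",
};
```

The point is not the exact shape of the record but that the claim is attributable and backed by evidence, which is what lets it supplement quantitative test results.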
Atomic Tests vs. Holistic Tests: A New Testing Approach

Summary: Compare WCAG 3.0’s quantitative/qualitative testing flow with WCAG 2.2’s success-criteria model. We cover Atomic/Holistic concepts, view/process scope, and how tests connect to scoring from a practical perspective. (https://www.codeslog.com/en/posts/wcag-3-atomic-holistic-tests/)

Introduction
When people hear “accessibility testing,” they often think of a checklist: “Does this button have alternative text?” “Is the contrast ratio high enough?” WCAG 2.2 is built around clear pass/fail checks like these. WCAG 3.0 moves toward a broader unit of evaluation, aiming to consider overall user-experience quality. That shift naturally changes how we test: we now combine fine-grained checks (Atomic) with contextual, real-world evaluation (Holistic). This shift connects directly to the score-based conformance model discussed in WCAG 3.0 Conformance Model: Moving Beyond A/AA/AAA. The weight you give each test type can change the score and the level you reach. ...

Published date: 2026-01-26 · Reading time: 4 min · Word count: 734 words · Author: Isaac
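
To make the Atomic side of the excerpt above concrete: an atomic test is a narrow, machine-checkable pass/fail question, such as the alt-text check it mentions. The sketch below is not code from the post and not an official WCAG test procedure, just a hypothetical example of what one such check could look like in TypeScript against the standard DOM API.

```typescript
// Minimal sketch of a single atomic check: every <img> should expose a text
// alternative via the alt attribute, unless it is explicitly marked decorative.
// This illustrates the "atomic" idea; it is not an exhaustive or official rule.
function checkImageAltText(root: Document | Element): { passed: boolean; failures: Element[] } {
  const failures: Element[] = [];
  root.querySelectorAll("img").forEach((img) => {
    const isDecorative = img.getAttribute("role") === "presentation";
    if (!img.hasAttribute("alt") && !isDecorative) {
      failures.push(img);
    }
  });
  return { passed: failures.length === 0, failures };
}
```

A holistic test, by contrast, asks something no such function can decide on its own, for example whether a whole task can actually be completed with a screen reader; that difference is what the post’s scoring discussion builds on.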
WCAG 3.0 Conformance Model: Moving Beyond A/AA/AAA

Summary: Based on the WCAG 3.0 Editor's Draft, this post summarizes the Foundational/Supplemental/Assertions model and how it differs from WCAG 2.2. (https://www.codeslog.com/en/posts/wcag-3-scoring-conformance/)

Introduction
In the previous post, we looked at why WCAG 3.0 reorganized “Success Criteria” into “Outcomes.” This post focuses on changes to the conformance model. WCAG 3.0 aims for a different approach than WCAG 2.x, but it is still in the Editor’s Draft (2026-01-05) stage and is not finalized. Conformance levels, scoring approaches, and evaluation methods are still being explored.
Important: This post is based on the WCAG 3.0 Editor’s Draft (2026-01-05). The draft can change at any time, and the document itself is explicitly marked as a work in progress. ...

Published date: 2026-01-25 · Reading time: 6 min · Word count: 1110 words · Author: Isaac
WCAG 3.0 Structure Anatomy: From Success Criteria to Outcomes

Summary: Deep dive into WCAG 3.0's revolutionary structural changes. Learn how the shift from checklists to user experience-centered evaluation works, with practical examples. (https://www.codeslog.com/en/posts/wcag-3-structure-outcomes/)

Introduction
“1.3.1 Info and Relationships - Level A”
If you’ve worked with WCAG 2.2, you’re familiar with this format of Success Criteria: numbers, levels, and clear test conditions. This structure has been the standard for web accessibility for over 15 years. However, as we explored in the previous article, this approach had limitations. “Websites that check every box but are still unusable” are proof of this. WCAG 3.0 has completely redesigned the structure itself to address this issue. It didn’t just add items; it changed the way we evaluate accessibility. ...

Published date: 2026-01-19 · Reading time: 17 min · Word count: 3507 words · Author: Isaac
Questions I Asked at the AI Public Service Evaluation

Summary: My experience as a citizen evaluator for an AI agent competition. Recording the questions I asked about accessibility, failure response, and communication design. (https://www.codeslog.com/en/posts/ai-evaluation-review/)

Introduction
I just returned from the AI Agent Scenario Competition Citizen Evaluation Panel held in Seoul. The panel consisted of expert judges and citizen evaluators, with the final scores reflecting both expert and citizen evaluation. Since the evaluation details are confidential, I can’t discuss individual teams or specific results. However, through this experience, I was able to clearly define the standards I use when evaluating AI public services. So in this post, rather than “how this team performed,” I want to record what questions I asked and why I thought those questions mattered. ...

Published date: 2025-12-31 · Reading time: 7 min · Word count: 1378 words · Author: Isaac
A $35 Evaluation Panel — Why I'm Still Going to Seoul

Summary: My experience joining an AI agent competition evaluation panel. Why a seemingly losing choice can plant seeds for the future. (https://www.codeslog.com/en/posts/evaluation-panel-seoul/)

Introduction
Let me be honest: this choice is a losing deal on paper. I’m spending a whole day, paying for transportation out of my own pocket, and the compensation is only about $35 (50,000 KRW). Yet I’m heading to Seoul tomorrow. I wanted to organize my reasons step by step.

[Photo: View from a train window, a journey to somewhere. Dieter K / Unsplash]

The Evaluation Panel Offer and My Decision
One day, while browsing the NIA (National Information Society Agency) website as usual, I discovered the Public Institution Website Citizen Evaluation Panel. Since then, I’ve been grateful to participate in the public web/app citizen evaluation activities for two years straight. The rewards and achievements may seem small, but the process and experience have been building up as personal assets. ...