This post is a record of grappling with web accessibility, public web services, and the responsibilities of developers, drawn from direct, hands-on experience.

Between law, technology, standards, and reality, I try to answer the question: “Are we really building for everyone?”


In the previous post, we confirmed that the Digital Inclusion Act asks “what is usable?”

Then these questions remain:

What if AI is a system that makes decisions? What if users can’t understand automated decisions? How do we distinguish content created by generative AI from content created by humans?

The answers to these questions come from “The Framework Act on the Development of Artificial Intelligence and Establishment of Trust-Based Infrastructure” (hereinafter referred to as the AI Basic Act), which takes effect on January 22, 2026, alongside the Digital Inclusion Act.


What a Web Accessibility Expert Felt Reading the AI Basic Act

For nearly 20 years working in web accessibility, I’ve repeatedly asked the same questions:

“Can all users access this screen?” “Can all users understand this information?” “Is this service excluding certain groups?”

When I first read the AI Basic Act, my first thought was:

“It’s the same question after all.”

The questions web accessibility has asked of websites, the AI Basic Act now asks of AI systems.

  • Information Accessibility: “Can users know that AI is being used?” (Transparency)
  • Cognitive Accessibility: “Can users understand AI decisions?” (Explainability)
  • Fair Access: “Does AI judgment exclude certain groups?” (Fairness)

In this post, from the perspective of a web accessibility professional, I will interpret the AI Basic Act as a new dimension of accessibility.


1. The Fourth Dimension of Accessibility: AI Systems

From Web Accessibility to AI Accessibility

In our work with web accessibility, we have addressed three dimensions of accessibility:

  1. Physical Accessibility: Can users access the screen?
  2. Technical Accessibility: Does it work across diverse environments?
  3. Cognitive Accessibility: Can users understand the content?

With the AI era comes a fourth dimension:

  4. Decision Accessibility: Can users understand and respond to AI decisions?
AI decision-making and judgment
Photo by Growtika / Unsplash

The AI Basic Act precisely addresses this “decision accessibility”.

Background of the AI Basic Act

  • National Assembly passage: December 26, 2024 (260 votes in favor)
  • Promulgation: January 21, 2025
  • Effective date: January 22, 2026

Integrating 19 bills through bipartisan agreement, the Act simultaneously pursues “AI industry development” and “trust-building infrastructure”.

Similar to web accessibility legislation:

  • 2008 Disability Discrimination Act: “Websites must be accessible to people with disabilities”
  • 2026 AI Basic Act: “AI systems must be transparent and explainable”

Just as web accessibility was unfamiliar 15 years ago, AI transparency seems unfamiliar now. But ultimately, we’re walking the same path.


2. Reading the AI Basic Act Through WCAG Principles

The four core principles of web accessibility (WCAG) align remarkably well with the AI Basic Act.

This is no coincidence. The essence of accessibility remains the same regardless of technological change.

Principle 1: Perceivable

The WCAG Question:

“Can users perceive that information exists?”

In Web Accessibility:

  • Alt text for images
  • Captions for videos
  • Information not conveyed by color alone

In the AI Basic Act: Transparency Requirement (Article 31)

“Prior disclosure of the use of high-impact AI or generative AI”

Accessibility Perspective:

Just as alt text tells users “this is an image,” AI labeling tells users “this is AI-generated”:

```html
<img src="photo.jpg" alt="Sunset at mountain peak">
<!-- → "This is an image" -->

<img src="ai.jpg" alt="AI-generated sunset at mountain peak">
<span class="ai-badge">🤖 AI Generated</span>
<!-- → "This is an AI-generated image" -->
```

The core is the same: enabling users to know “what” they’re seeing.


Principle 2: Operable

The WCAG Question:

“Can users operate all functions?”

In Web Accessibility:

  • All functions usable via keyboard alone
  • Sufficient time provided
  • No mouse required

In the AI Basic Act: Rights to Choice and Control

Combined with the Digital Inclusion Act (Article 21):

  • Right to reject AI decisions
  • Right to request human intervention
  • Non-digital alternative means

Accessibility Perspective:

| Web Accessibility | AI Accessibility |
| --- | --- |
| Usable without mouse | Service usable without AI |
| Keyboard alternatives | Human counselor alternative |
| Time adjustments | AI decision review requests |

The core is the same: giving users control.
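As a sketch of what “giving users control” can look like in code, here is a minimal routing function: when the user exercises the opt-out, the request goes to a human reviewer, just as a keyboard alternative bypasses the mouse. The names here (`Decision`, `route_request`) and the stubbed AI path are my own illustrations, not terms from the Act.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    decided_by: str  # "ai" or "human"

def route_request(request: dict, opt_out_of_ai: bool) -> Decision:
    """Honor the user's right to reject AI handling (hypothetical sketch)."""
    if opt_out_of_ai:
        # Human-counselor alternative: park the request for a person
        return Decision(outcome="pending_human_review", decided_by="human")
    # AI path: a real system would call a model here; we stub the result
    # and record provenance so the decision can be appealed later
    return Decision(outcome="ai_result_placeholder", decided_by="ai")
```

The point is not the stub logic but the guaranteed non-AI path and the recorded `decided_by` provenance, which later makes review requests possible.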


Principle 3: Understandable

The WCAG Question:

“Can users understand information and operations?”

In Web Accessibility:

  • Clear error messages
  • Predictable behavior
  • Readable text

In the AI Basic Act: Explainability

While “explainability” isn’t explicitly stated in the law, AI Impact Assessment (Article 35) requires:

  • What fundamental rights are affected
  • How those rights are affected
  • What mitigation measures exist

Accessibility Perspective:

| Web Accessibility | AI Accessibility |
| --- | --- |
| “Password must be 8+ characters” | “Loan was rejected because…” |
| “Step 1 of 3” | “AI’s reasoning was…” |
| Cause of error clearly stated | Decision process explained |

The core is the same: enabling users to understand “why”.
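To make the parallel concrete, here is a minimal sketch of an “error message for AI decisions”: turning a decision’s most influential factors into the same kind of plain-language explanation a form gives for a bad password. The factor names and weights are invented for illustration; a real system would draw them from its own model.

```python
def explain_decision(outcome: str, factors: list[tuple[str, float]]) -> str:
    """Render the two most influential factors as a plain-language reason."""
    top = sorted(factors, key=lambda f: abs(f[1]), reverse=True)[:2]
    reasons = ", ".join(name for name, _ in top)
    return f"Decision: {outcome}. Key factors: {reasons}."

# Hypothetical weighted factors, e.g. from a model's feature attributions
print(explain_decision("loan rejected", [
    ("debt-to-income ratio", 0.62),
    ("employment length", -0.18),
    ("credit history", 0.41),
]))
# → Decision: loan rejected. Key factors: debt-to-income ratio, credit history.
```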


Principle 4: Robust

The WCAG Question:

“Does it work reliably across diverse environments?”

In Web Accessibility:

  • Compatibility across browsers
  • Screen reader support
  • Compatibility with future technologies

In the AI Basic Act: Safety Assurance (Article 32)

For large-scale AI models (10^26 FLOPs or more):

  • Risk identification, assessment, and mitigation
  • Safety incident monitoring
  • Risk management system establishment

Accessibility Perspective:

| Web Accessibility | AI Accessibility |
| --- | --- |
| Works across multiple browsers | Testing across diverse user scenarios |
| Assistive technology support | Bias monitoring |
| Standards compliance | Safety mechanisms |

The core is the same: ensuring safety and reliability for all users.
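Bias monitoring can start as simply as cross-browser testing once did: compare outcome rates across user groups and flag large gaps. A minimal sketch under assumptions of my own (the record format and the 0.2 gap threshold are illustrative, not figures from the Act):

```python
def approval_rates(records: list[dict]) -> dict[str, float]:
    """Approval rate per group, e.g. {"A": 1.0, "B": 0.5}."""
    counts: dict[str, list[int]] = {}
    for r in records:
        approved, total = counts.setdefault(r["group"], [0, 0])
        counts[r["group"]] = [approved + int(r["approved"]), total + 1]
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparity_flagged(records: list[dict], max_gap: float = 0.2) -> bool:
    """Flag when the best- and worst-served groups diverge too far."""
    rates = approval_rates(records).values()
    return max(rates) - min(rates) > max_gap
```

The same habit we built for assistive-technology testing, applied to decision outcomes instead of screen readers.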


3. Understanding Core Concepts Through an Accessibility Lens

The AI Basic Act doesn’t regulate all AI. Just as web accessibility prioritizes based on “importance,” the AI Basic Act differentiates responsibility based on “impact”.

① High-Impact AI - The AI Version of “Core Functions”

“Core Functions” in Web Accessibility:

  • Login, payment, civil complaint submission
  • Stricter standards applied
  • Alternative means mandatory

“High-Impact AI” in the AI Basic Act:

Legal definition (Article 2, Paragraph 4):

“AI that significantly affects or risks affecting human life, body, or fundamental rights”

Application fields:

  • Hiring, loan review
  • Medical diagnosis
  • Student evaluation
  • Public service eligibility determination

Assessment criteria:

  • Does it affect people’s lives?
  • Does incorrect judgment violate fundamental rights?
  • Is the decision irreversible?

Comparison with Web Accessibility:

| Category | Web Accessibility | AI Basic Act |
| --- | --- | --- |
| General content | Information pages | Recommendation, search AI |
| Core functions | Login, payment | Hiring, loan AI |
| Application standard | Stricter | Special obligations |
| Alternative means | Mandatory | Human review pathway |
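The triage above can be sketched as a small helper. The field names paraphrase the Act’s example domains (hiring, loans, medical diagnosis, student evaluation, public services); treat this as an illustration, not a legal determination.

```python
# Hypothetical domain list paraphrasing the Act's examples of
# high-impact application fields; not an official enumeration.
HIGH_IMPACT_FIELDS = {
    "hiring",
    "loan_review",
    "medical_diagnosis",
    "student_evaluation",
    "public_service_eligibility",
}

def is_high_impact(field: str, affects_fundamental_rights: bool = False) -> bool:
    """High-impact if in a listed domain or it touches fundamental rights."""
    return field in HIGH_IMPACT_FIELDS or affects_fundamental_rights
```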

② Generative AI - The AI Version of “Content Labeling”

“Alt Text” in Web Accessibility:

  • Describes what the image is
  • Distinguishes content type
  • Information for screen reader users

“Generative AI Labeling” in the AI Basic Act:

Legal definition (Article 2, Paragraph 5):

“AI that generates text, sound, images, video, etc. based on input data”

Mandatory requirements:

  • Clearly mark AI-generation fact
  • Make distinguishable from real content
  • Consider user age and conditions

Extension of Web Accessibility Principles:

```html
<!-- Web Accessibility: Image description -->
<img src="photo.jpg" alt="Sunset landscape at mountain peak">

<!-- AI Era: Generative AI labeling -->
<div class="content">
  <img src="ai-image.jpg" alt="AI-generated sunset landscape at mountain peak">
  <span class="ai-badge" aria-label="AI-generated content">
    🤖 AI Generated
  </span>
</div>
```

The core principle remains the same: enabling users to recognize content type.


③ Safety Assurance Target - The AI Version of “Standards Compliance”

“Standards Compliance” in Web Accessibility:

  • Public institutions: mandatory
  • Large enterprises: recommended
  • Small organizations: voluntary

“Safety Assurance” in the AI Basic Act:

Criterion (Presidential Decree Article 23):

Cumulative training computation of 10^26 FLOPs or more

Implication:

  • Applies only to mega-scale AI models
  • GPT-4, Claude level
  • Most general AI not applicable

Scale and Responsibility Proportionality:

| Web Accessibility | AI Basic Act |
| --- | --- |
| Public institutions: mandatory | Mega-scale AI: mandatory |
| Large enterprises: recommended | High-impact AI: recommended |
| Small organizations: voluntary | General AI: voluntary |

Core: responsibility proportional to impact and scale.
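The threshold itself is simple to encode. The sketch below just compares cumulative training compute against the 10^26 FLOPs line from the Presidential Decree; the function and constant names are my own.

```python
# Safety-assurance threshold from Presidential Decree Article 23:
# cumulative training computation of 10^26 FLOPs or more.
SAFETY_THRESHOLD_FLOPS = 1e26

def needs_safety_assurance(training_flops: float) -> bool:
    """True only for mega-scale models at or above the threshold."""
    return training_flops >= SAFETY_THRESHOLD_FLOPS
```

A model trained with 2 × 10^26 FLOPs falls under the duty; a typical service model at 10^24 does not, which is why most general AI is out of scope.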

AI's expanding impact and responsibility across diverse sectors. (Created by Nanobanana)

4. Practical Guide for Accessibility Professionals

Developers experienced in web accessibility checks can approach AI accessibility checks the same way.

Phase 1: Service Diagnosis (Similar to Web Accessibility Step 1)

Web Accessibility Diagnosis:

□ Images present? → alt needed
□ Videos present? → captions needed
□ Keyboard operable?

AI Accessibility Diagnosis:

□ Using AI?
  └─ Yes → AI labeling needed

□ What type of AI?
  ├─ Generative (text/image/audio/video)
  ├─ Decision-making (screening/evaluation/judgment)
  └─ Recommendation/prediction

□ High-impact AI?
  ├─ Affects hiring/termination?
  ├─ Affects loan/credit evaluation?
  ├─ Affects medical/educational decisions?
  └─ Significantly affects fundamental rights?
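The diagnosis checklist translates naturally into a triage function, the same way an accessibility audit becomes a linter rule. The dict keys and duty labels below are my own illustrative naming, not legal terms.

```python
def diagnose(service: dict) -> list[str]:
    """Return the duties the Phase-1 checklist would surface (sketch)."""
    duties: list[str] = []
    if not service.get("uses_ai"):
        return duties  # no AI, no AI-specific duties
    duties.append("disclose AI use")
    if service.get("ai_type") == "generative":
        duties.append("label AI-generated content")
    if service.get("high_impact"):
        duties.append("special obligations: explanation + human review path")
    return duties
```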

Phase 2: Ensuring Transparency (Web Accessibility’s “Perceivable”)

Web Accessibility Checklist:

□ Alt attributes on all images
□ Captions on videos
□ Information not conveyed by color alone

AI Transparency Checklist:

□ Disclose AI use
  ├─ Location: homepage, terms of service
  ├─ Method: "This service uses AI"
  └─ Scope: specifically state the AI's functions

□ Additional generative AI labeling
  ├─ Watermark or badge
  ├─ AI metadata
  └─ Maintain labeling upon download

□ Verify perceivability
  ├─ Can elderly users understand?
  ├─ Screen reader accessible?
  └─ Distinguishable to children?

Violation: Fine up to 30 million won (1+ year grace period)


Phase 3: Ensuring Explainability (Web Accessibility’s “Understandable”)

Web Accessibility Checklist:

□ Clear error messages
□ Input format guidance
□ Progress indication

AI Explainability Checklist:

□ Prepare AI decision explanations
  ├─ Can answer "why this result?"
  ├─ Present key factors
  └─ Show data scope

□ Provide user control
  ├─ Option to reject AI decision
  ├─ Request human review pathway
  └─ Appeal procedure guidance

□ Error response system
  ├─ Report incorrect judgments
  ├─ Correction request process
  └─ Remedy options

Essential for high-impact AI


Phase 4: Ensuring Inclusion (Integration with Digital Inclusion Act)

Web Accessibility + Digital Inclusion Checklist:

□ Physical accessibility
  ├─ Keyboard navigation
  ├─ Screen reader compatibility
  └─ Color contrast

□ Cognitive accessibility
  ├─ Intuitive UI
  ├─ Clear guidance
  └─ Easy-to-understand language

□ Alternative means
  ├─ Non-AI options
  ├─ Phone/email counseling
  ├─ Offline visiting
  └─ Human assistance requests

Practical Scenario: AI Hiring System Review

Step 1: Diagnosis

✓ AI use: Yes (interview evaluation)
✓ Type: Decision-making AI
✓ High-impact AI: Yes (hiring decision)
→ Special obligations apply

Step 2: Transparency

✓ Notice: "This interview uses AI technology"
✓ Scope: "AI analyzes voice and answer content"
✓ Limitation: "Final decision reviewed by hiring manager"

Step 3: Explainability

✓ Criteria: Logic, communication, job fit
✓ Feedback: Key evaluation points provided with results
✓ Appeal: Manager re-review request pathway

Step 4: Inclusion

✓ Accessibility: keyboard, screen reader support
✓ Usability: practice mode, mid-process saving
✓ Alternatives: phone interview option
✓ Testing: pre-validation with disabled/elderly users
Conducting interviews with AI participation. (Created by Nanobanana)

5. Integration with Digital Inclusion Act: Complete Accessibility

The two laws aren’t separate; they work together toward complete accessibility.

Scenario 1: AI-Based Hiring System

Question 1: Can people access it? (Physical accessibility)
├─ Digital Inclusion Act: All users can access?
└─ AI Basic Act: Prior notice of AI interviews?

Question 2: Can people understand it? (Cognitive accessibility)
├─ Digital Inclusion Act: Non-digital-skilled users can use?
└─ AI Basic Act: AI evaluation criteria explainable?

Question 3: Is it fair? (Fair access)
├─ Digital Inclusion Act: Alternatives (phone, visit)?
└─ AI Basic Act: Unbiased evaluation?

→ Accessible, understandable, fair AI hiring

Scenario 2: Generative AI Content Platform

Question 1: Can people distinguish? (Perceivable)
├─ Digital Inclusion Act: Intuitive UI?
└─ AI Basic Act: Clear AI generation marking?

Question 2: Can people operate? (Operable)
├─ Digital Inclusion Act: Clear error guidance?
└─ AI Basic Act: Watermark, metadata handling?

→ Distinguishable, operable AI platform

Scenario 3: Public AI Chatbot

Question 1: Can everyone use it?
├─ Digital Inclusion Act: Alternative paths (phone, visit) required
└─ AI Basic Act: Disclose AI use

Question 2: Can people trust it?
├─ Digital Inclusion Act: Easy-to-understand language
└─ AI Basic Act: Accountability for misinformation

Question 3: Can it improve?
├─ Digital Inclusion Act: Regular usability assessment
└─ AI Basic Act: High-impact AI verification

→ Accessible, trustworthy, improvable public service
Expert reviewing AI results
Photo by Vitaly Gariev / Unsplash

6. Preparation Roadmap: Like Web Accessibility Improvement

Meaning of Grace Period

Ministry of Science and ICT:

“Fines will be deferred for at least one year”

Actual timeline:

2026.1.22  Effective date
2026~2027  Grace period (warnings/recommendations)
2027~      Full enforcement (fines)

Comparison with Web Accessibility:

  • 2008 Disability Discrimination Act: 2-year grace period
  • 2010 onwards: Full enforcement began
  • Now: standardized practice

AI Basic Act follows the same path:

  • Now is the preparation window
  • Full enforcement from 2027
  • Will become the standard

Step-by-Step Preparation (Same as Web Accessibility Improvement)

First Half 2026: Diagnosis

Like web accessibility checks:
1. Current state assessment (AI use?)
2. Type classification (generative/high-impact/general)
3. Gap analysis (current vs. legal requirements)

Second Half 2026: Implementation

Like web accessibility improvement:
1. Add transparency disclosures
2. Build explanation systems
3. Provide alternatives

2027 and Beyond: Maintenance

Like web accessibility management:
1. Regular checks (quarterly)
2. Incorporate user feedback
3. Continuous improvement

7. Global Context

The Position of Korea’s AI Basic Act

Diverse experts examining AI fairness. (Created by Nanobanana)

EU AI Act

  • Risk-based classification
  • Safety threshold: 10^25 FLOPs

Korea’s AI Basic Act

  • Development + minimum regulation
  • Safety threshold: 10^26 FLOPs (more relaxed)
  • 1+ year grace period

Approach:

  • EU: Human rights protection priority
  • Korea: Industry development + trust

Note: Detailed comparison with global AI regulations and accessibility will be covered in a future series. We plan to compare Korea’s law with the EU AI Act’s accessibility requirements, state-level US bills’ inclusion provisions, and more.


8. Conclusion - Evolution of Accessibility

Same Questions, New Tools

Twenty years ago, web accessibility was unfamiliar.

“Why add alt text?” “Why must keyboard navigation work?”

Now it’s standard.

AI accessibility is walking the same path.

“Why disclose AI use?” “Why explain AI decisions?”

In 15 years, it will be standard too.


Accessibility’s Essence Doesn’t Change

What Web Accessibility Asked:

  • Can users perceive?
  • Can users operate?
  • Can users understand?
  • Is it safe?

What AI Accessibility Asks:

  • Can users perceive? (Transparency)
  • Can users operate? (Choice)
  • Can users understand? (Explainability)
  • Is it safe? (Fairness)

The same question.


The Accessibility Professional’s Role

Developers experienced in web accessibility won’t find the AI Basic Act unfamiliar.

We already know:

  • How to find who’s excluded
  • How to create alternatives
  • How to make everything usable

If you understand WCAG’s 4 principles:

  • You can understand AI transparency
  • You can implement AI explainability
  • You can assess AI fairness

The principles stay the same even as technology changes.

It’s time to apply web accessibility principles learned in the web era to AI.

Perhaps accessibility professionals are precisely the people needed most in the AI era.


Questions Continue

The question that started in web accessibility expanded to digital inclusion and now reaches AI.

“Can people access?” “Can people use?” “Can people trust?”

Three questions are one.

As developers, we’re still building the same thing.

Technology for everyone.

Only the tools have changed.


Series Complete

This post concludes the 〈Evolution of Accessibility〉 series.

  1. [Beyond Accessibility to Digital Inclusion - The Beginning of a New Era]
  2. [Beyond Technology, Toward People – Understanding the Digital Inclusion Act]
  3. [Beyond Regulation, Toward Trust - The AI Basic Act and Accessibility]