How Employers Can Leverage AI Assistants to Build Accessibility and Inclusion
ChatGPT is one of the most popular internet applications ever. Launched in 2022, it reached one million users in five days. After ChatGPT's image generation feature was released in March 2025, one million users were added in a single hour [1]. Today, many artificial intelligence (AI) assistants are available, and chances are you've already used one.
In this post, we describe AI-powered assistants and explore the ways they can be leveraged to build workplace accessibility and inclusion.
About AI Assistants
AI-powered assistants use large language models and machine learning to understand the context of requests and to generate original responses. Examples of AI assistants include ChatGPT by OpenAI, Microsoft Copilot, Google Gemini, Claude by Anthropic, and many others.
Free versions usually limit the number of interactions you can have. The exact limit is often not published because it can vary, for example with total demand on a given day.

[Image: welcome screen for the free version of Claude by Anthropic]
Where does their information come from?
The information provided by AI-powered assistants can come from a mix of publicly available content on the internet, licensed data, and information and patterns learned during training. Legal discussions and lawsuits are ongoing in Canada and around the world about the scraping of personal and public data from the internet [2].
During interactions with a user, most AI assistants draw on the pre-existing information they were trained on, supplemented as needed by a real-time internet search (which, in some cases, the user must enable).
Is their information reliable?
The information provided by AI assistants may not always be up to date and may contain errors, including fabricated content (often called “hallucinations”), and biases. Bias and discrimination related to disability, race, gender, gender identity, and other demographics can result from the skewed data used to train AI systems and from the lack of diverse AI developers who can recognize and mitigate the built-in biases. Even the computational methods and algorithms used to verify that data sets are bias-free cannot account for the socio-cultural or ethical complexities within information [3].
Information provided by an AI assistant should always be verified for bias and accuracy.
You can also ask the AI assistant how recent its knowledge base is and what its sources are. You can ask it to check the information in real-time online, particularly for legislation, statistics, facts, research, or other time-relevant information.
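For example, you might ask: “How recent is your training data, and what sources are you drawing on for this answer?” or “Please search online for the current requirements under the Accessible Canada Act and cite your sources.” (These prompts are illustrative; adapt the wording to your assistant and jurisdiction.)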
Privacy of Personal Data
Most publicly available AI assistants do not have access to any personal information unless you choose to share it with them during an interaction. Many AI companies state that users should not share personal data during interactions, and that data from interactions is neither collected nor used to train their AI.
To ensure the privacy and protection of personal data, names and personal details should not be shared with a publicly available AI assistant and should always be removed from documents uploaded during interactions. When deciding which AI-powered tools to use, consider companies focused on security and privacy.
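For teams that prepare documents programmatically, the following is a minimal sketch, in Python, of one way to strip common identifiers such as email addresses and phone numbers before sharing text with a public AI assistant. The patterns are illustrative assumptions, not an exhaustive redaction tool, and names still require human review.

```python
import re

# Illustrative patterns only; they are not exhaustive, and names
# still need to be removed by a person before upload.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace common personal identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com or 613-555-0123."
    print(redact(sample))
    # Prints: Contact Jane at [EMAIL REMOVED] or [PHONE REMOVED].
    # Note that the name "Jane" is untouched: automated patterns
    # cannot reliably catch names, so a person must review the output.
```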
Provide digital literacy training for all staff and implement policies on AI use to protect the privacy of personal data and to ensure the proper and ethical use of AI tools.
AI Assistants Cannot Replace HR or Service Providers
AI-assisted tools can enhance, but not replace, professional insights and decisions. While AI assistants can process context, nuance, and complex information, they generate responses from the limited information on which they were trained, through algorithmic reasoning and pattern detection.
One of the dangers of AI is that it not only mirrors real-world prejudices but also plays a part in perpetuating systemic inequality. Human intervention is crucial to assess, question, and enhance information provided by AI assistants.
AI responses should always be vetted by a qualified person who can check their accuracy and who may consider additional factors, such as professional insights, meanings, values, equity, bias, and individual considerations for personalization.
How AI Assistants Can Support Workplace Inclusion
Organizations can leverage AI assistants in many ways to support their workplace accessibility and inclusion efforts, for example with the following tasks.
Keep in mind that the information generated by an AI assistant may be general and will have to be tailored to workplace and worker circumstances. Think of it as a starting point.
Develop Strategies and a Plan:
- Analyze accessibility compliance gaps by reviewing existing policies against standards in your region.
- Generate customized accessibility audit checklists for specific industries or business types.
- Create methods to collect staff input, such as anonymous feedback forms or surveys.
- Create implementation roadmaps with prioritized action items and timelines.
- Draft accessibility statements, policies, and procedures tailored to your needs and to local regulations and standards.
- Research and summarize relevant regulations and best practices.

HR and Recruitment Support:
- Review job descriptions and job postings to minimize ableist and biased phrasing and to ensure inclusive language.
- Create customizable templates, for instance for inclusion plans and accommodation request workflows, which can then be tailored to each request.
- Analyze job task demands and propose inclusive modifications, with staff input.
- Assist with integrating inclusion into employee handbooks and training materials, after consulting with staff.
- Analyze compensation data for equity gaps.
- Develop inclusive recruitment processes, such as suggesting accessible interview processes, designing questions that reduce bias, or creating alternative assessment formats (for instance, task-based rather than written or verbal formats).
Employee Support and Training:
- Generate scripts to practice conversations about disclosure or accommodations, and role-play them with the AI assistant.
- Help troubleshoot and explore accommodation solutions.
- Provide real-time guidance on accessible document creation and communication, and create tailored documents, for instance screen-reader-friendly versions.
- Develop FAQ resources for managers on inclusive practices and on supporting employees experiencing disability.
- Create step-by-step guides for using assistive technologies.
- Break down complex tasks and create plain language checklists, step-by-step instructions, or time management plans, either in text or visual format.
Implementation Assistance:
- Audit your website and digital content for accessibility (a basic example follows this list).
- Generate alt-text suggestions for images and media (which may have to be adapted for your context).
- Create accessible, plain language versions of existing documents and presentations.
- Provide a checklist of accessible practices for in-person and virtual meetings and events.
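As a minimal illustration of the first item above (auditing digital content), the following Python sketch flags images on a web page that lack alt text. It assumes the requests and beautifulsoup4 packages are installed; it covers only one basic check, and a full audit requires purpose-built tools and human testing with assistive technology.

```python
# Minimal sketch: flag <img> tags with missing or empty alt text.
# Assumes: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def find_images_missing_alt(url: str) -> list[str]:
    """Return the src of each image without a non-empty alt attribute."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    missing = []
    for img in soup.find_all("img"):
        alt = (img.get("alt") or "").strip()
        if not alt:  # an empty alt is valid for purely decorative images,
            missing.append(img.get("src", "(no src)"))  # so review each flag
    return missing

if __name__ == "__main__":
    for src in find_images_missing_alt("https://example.com"):
        print("Missing alt text:", src)
```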
Ongoing Monitoring:
- Track progress against inclusion metrics and generate reports.
- Analyze employee feedback for accessibility and inclusion concerns, and list potential next steps.
- Stay up to date on changing regulations and standards.
- Provide reminders for regular accessibility reviews and updates.
Consultation and Co-Creation Are Key
AI assistants can be powerful tools to support inclusion and accessibility, but human guidance is the key to success. Regular consultations and co-creation with employees, job seekers, and persons with lived experience can 1) check for biases and discrimination, and 2) help tailor information and processes to ensure they’re relevant and effective for your workplace.
Digital literacy training can give staff a foundation in AI, safety, and ethical considerations. Consider creating an AI committee to assist with building policies, processes, consultation, and monitoring. Encourage diverse representation to reduce bias and discrimination.
Use AI as a starting point for efficiency, but always centre human voices, expertise, and experiences in processes and decision-making.
Additional Resources
- Guide on the use of generative artificial intelligence (Government of Canada)
- Cyber security guidance on generative artificial intelligence (AI) (Canadian Centre for Cyber Security)
- AI chatbots and the workplace: risks and best practices for employers (Torys LLP – April 2023)
- Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy (Multiple authors around the world – August 2023)
- The Top 11 ChatGPT Alternatives You Can Try Today (DataCamp – January 2025) (Note that some of the pros and cons are outdated. For the most up-to-date information, check with individual companies.)
References
1. Kylie Robison. ChatGPT “added one million users in the last hour.” The Verge. Published March 31, 2025. Accessed May 29, 2025.
2. Kirsten Thompson and George Hua. Data scraping under fire: What Canadian companies can learn from KASPR’s €240K fine. Dentons Data. Published March 4, 2025. Accessed June 4, 2025.
3. Xavier Ferrer. Bias and Discrimination in AI: A Cross-Disciplinary Perspective. Technology and Society. Published August 7, 2021. Accessed June 4, 2025.