
The impact of artificial intelligence on workers includes both applications to improve worker safety and health, and potential hazards that must be controlled.
One potential application is using AI to eliminate hazards by removing humans from hazardous situations that involve risk of stress, overwork, or musculoskeletal injuries. Predictive analytics may also be used to identify conditions that may lead to hazards such as fatigue, repetitive strain injuries, or toxic substance exposure, enabling earlier interventions. Another is to streamline workplace safety and health workflows by automating repetitive tasks, enhancing safety training programs through virtual reality, or detecting and reporting near misses.
When used in the workplace, AI also presents the possibility of new hazards. These may arise from machine learning techniques leading to unpredictable behavior and inscrutability in their decision-making, or from cybersecurity and information privacy issues. Many hazards of AI are psychosocial due to its potential to cause changes in work organization. These include changes in the skills required of workers,[1] increased monitoring leading to micromanagement, algorithms unintentionally or intentionally mimicking undesirable human biases, and blame for machine errors being assigned to the human operator rather than to the system. AI may also lead to physical hazards in the form of human–robot collisions, and to ergonomic risks of control interfaces and human–machine interactions. Hazard controls include cybersecurity and information privacy measures, communication and transparency with workers about data usage, and limitations on collaborative robots.
From a workplace safety and health perspective, only "weak" or "narrow" AI that is tailored to a specific task is relevant, as there are many examples that are currently in use or expected to come into use in the near future. "Strong" or "general" AI is not expected to be feasible in the near future,[according to whom?] and discussion of its risks is within the purview of futurists and philosophers rather than industrial hygienists.
Certain digital technologies are predicted to result in job losses. Since the start of the 2020s, the adoption of modern robotics has led to net employment growth. However, many businesses anticipate that automation, or the employment of robots, would result in job losses in the future. This is especially true for companies in Central and Eastern Europe.[2][3][4] Other digital technologies, such as platforms or big data, are projected to have a more neutral impact on employment.[2][4] Starting in 2023, a large number of tech workers have been laid off;[5] many of these job cuts have been attributed to artificial intelligence.[6][7]
For any potential AI health and safety application to be adopted, it must be accepted by both managers and workers.[8] For example, worker acceptance may be diminished by concerns about information privacy,[9] or by a lack of trust in and acceptance of the new technology, which may arise from inadequate transparency or training.[10]: 26–28, 43–45  Alternatively, managers may emphasize increases in economic productivity rather than gains in worker safety and health when implementing AI-based systems.[11]

AI may increase the scope of work tasks where a worker can be removed from a situation that carries risk.[12] In a sense, while traditional automation can replace the functions of a worker's body with a robot, AI effectively replaces the functions of their brain with a computer. Hazards that can be avoided include stress, overwork, musculoskeletal injuries, and boredom.[13]: 5–7
This can expand the range of affected job sectors into white-collar and service sector jobs such as those in medicine, finance, and information technology.[14] As an example, call center work carries extensive health and safety risks because of its repetitive and demanding nature and its high rates of micro-surveillance. AI-enabled chatbots lower the need for humans to perform the most basic call center tasks.[13]: 5–7

Machine learning is used for people analytics to make predictions about worker behavior to assist management decision-making, such as hiring and performance assessment. These predictions could also be used to improve worker health. The analytics may be based on inputs such as online activities, monitoring of communications, location tracking, and voice and body language analysis of filmed interviews. For example, sentiment analysis may be used to spot fatigue and prevent overwork.[13]: 3–7  Decision support systems can similarly be used, for example, to prevent industrial disasters or to make disaster response more efficient.[17]
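As a minimal, illustrative sketch of how such an analysis might be wired together, the following Python snippet uses an off-the-shelf sentiment classifier (the Hugging Face transformers library) to flag strongly negative worker status updates for a fatigue check-in. The threshold and the assumption that negative sentiment signals fatigue are illustrative only, not a validated method from the sources cited above.

```python
# Minimal sketch: flag worker status updates whose sentiment is strongly
# negative as candidates for a fatigue/overwork check-in.
# Assumption: negative sentiment is treated as a rough fatigue signal, and
# the 0.9 threshold is arbitrary; neither comes from a validated method.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English sentiment model

status_updates = [
    "Finished the audit early, everything looks good.",
    "Third double shift this week, can barely keep my eyes open.",
]

for text, result in zip(status_updates, sentiment(status_updates)):
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        print(f"Possible fatigue signal, suggest check-in: {text!r}")
```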
For manual material handling workers, predictive analytics and artificial intelligence may be used to reduce musculoskeletal injury. Traditional guidelines are based on statistical averages and are geared towards anthropometrically typical humans. The analysis of large amounts of data from wearable sensors may allow real-time, personalized calculation of ergonomic risk and fatigue management, as well as better analysis of the risk associated with specific job roles.[9]
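As a hedged sketch of what a personalized, real-time calculation could look like, the snippet below turns trunk-flexion readings from a hypothetical wearable sensor into a rolling awkward-posture exposure score. The 60-degree threshold, 10-minute window, and column names are assumptions for illustration, not a validated ergonomic model.

```python
# Minimal sketch: convert a stream of trunk-flexion angles from a wearable IMU
# into a per-worker rolling "awkward posture" exposure score.
# The 60-degree threshold, 10-minute window, and column names are assumptions.
import pandas as pd

readings = pd.DataFrame(
    {
        "timestamp": pd.date_range("2024-01-01 08:00", periods=6, freq="1min"),
        "trunk_flexion_deg": [12, 65, 70, 20, 80, 15],
    }
).set_index("timestamp")

# Fraction of the last 10 minutes spent above the assumed 60-degree threshold.
readings["awkward"] = (readings["trunk_flexion_deg"] > 60).astype(float)
exposure_score = readings["awkward"].rolling("10min").mean()
print(exposure_score)
```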
Wearable sensors may also enable earlier intervention against exposure to toxic substances than is possible with periodic area or breathing-zone testing. Furthermore, the large data sets generated could improve workplace health surveillance, risk assessment, and research.[17]
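A minimal sketch of how continuous sensor data could trigger earlier intervention than periodic sampling: the running time-weighted average of a sensed concentration is compared against a fraction of a limit value. The limit, averaging approach, and alert fraction below are illustrative assumptions rather than values from any standard.

```python
# Minimal sketch: alert when the running time-weighted average (TWA) of a
# sensed airborne concentration approaches a limit value.
# The 5 mg/m3 limit and the 50% alert fraction are illustrative assumptions,
# not values taken from any occupational exposure standard.
import pandas as pd

LIMIT_MG_M3 = 5.0
ALERT_FRACTION = 0.5  # warn at half the limit so intervention can happen early

samples = pd.Series(
    [0.8, 1.2, 3.9, 4.6, 4.8],
    index=pd.date_range("2024-01-01 08:00", periods=5, freq="1h"),
    name="concentration_mg_m3",
)

running_twa = samples.expanding().mean()  # average since the start of the shift
for ts, twa in running_twa.items():
    if twa > ALERT_FRACTION * LIMIT_MG_M3:
        print(f"{ts}: running TWA {twa:.2f} mg/m3 exceeds {ALERT_FRACTION:.0%} of the limit")
```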
AI can also be used to make the workplace safety and health workflow more efficient.[18] Digital assistants such as Amazon Alexa, Google Assistant, and Apple Siri are increasingly adopted in workplaces to enhance productivity by automating routine tasks. These AI-based tools can manage administrative duties such as scheduling meetings, sending reminders, processing orders, and organizing travel plans. This automation can improve workflow efficiency by reducing time spent on repetitive tasks, freeing employees to focus on higher-priority responsibilities.[19] Digital assistants are especially valuable in streamlining customer service workflows, where they can handle basic inquiries, reducing the demand on human employees.[19] However, challenges remain in fully integrating these assistants because of concerns over data privacy, accuracy, and organizational readiness.[19]
One example is the coding of workers' compensation claims, which are submitted in prose narrative form and must be manually assigned standardized codes. AI is being investigated to perform this task faster, more cheaply, and with fewer errors.[20][21]
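One common way such automated coding is approached is as text classification. The following sketch, with made-up narratives and a made-up two-code label set, shows the general idea using scikit-learn; it is not the method used in the cited studies, and a real system would be trained on a large body of already-coded claims.

```python
# Minimal sketch: assign standardized codes to free-text workers' compensation
# claim narratives with a bag-of-words classifier.
# The narratives and the two-code label set are made up for illustration.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_narratives = [
    "worker slipped on wet floor and fractured wrist",
    "fell from ladder while stocking shelves, injured ankle",
    "repetitive scanning motion caused wrist pain over several months",
    "chronic shoulder strain from lifting boxes on the line",
]
train_codes = ["FALL", "FALL", "REPETITIVE_STRAIN", "REPETITIVE_STRAIN"]

coder = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
coder.fit(train_narratives, train_codes)

print(coder.predict(["employee tripped over cable and broke a finger"]))
```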
AI-enabled virtual reality systems may be useful for safety training for hazard recognition.[17]
Artificial intelligence may be used to more efficiently detect near misses. Reporting and analysis of near misses are important in reducing accident rates, but they are often underreported because they are not noticed by humans, or are not reported by workers due to social factors.[22]
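One plausible way to surface unreported near misses is to flag statistically unusual events in existing telemetry for human review. The sketch below applies an anomaly detector to hypothetical vehicle-telemetry features; the feature set, parameters, and data are illustrative assumptions rather than a documented deployment.

```python
# Minimal sketch: flag statistically unusual vehicle-telemetry events as
# candidate near misses for human review.
# The feature set (peak deceleration, distance to nearest pedestrian) and the
# contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: [peak deceleration in m/s^2, distance to nearest pedestrian in m]
normal_events = np.column_stack([rng.normal(1.5, 0.5, 500), rng.normal(8.0, 2.0, 500)])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)

new_events = np.array([[1.4, 7.5], [6.0, 0.8]])  # second row: hard stop very close to a person
for event, label in zip(new_events, detector.predict(new_events)):
    if label == -1:  # -1 marks an outlier
        print(f"Candidate near miss, route to safety review: {event}")
```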

There are several broad aspects of AI that may give rise to specific hazards. The risks depend on implementation rather than the mere presence of AI.[13]: 2–3
Systems using sub-symbolic AI such as machine learning may behave unpredictably and are more prone to inscrutability in their decision-making. This is especially true if a situation is encountered that was not part of the AI's training dataset, and is exacerbated in environments that are less structured. Undesired behavior may also arise from flaws in the system's perception (arising either from within the software or from sensor degradation), knowledge representation and reasoning, or from software bugs.[10]: 14–18  It may also result from improper training, such as a user applying the same algorithm to two problems that do not have the same requirements.[13]: 12–13  Machine learning applied during the design phase may have different implications than that applied at runtime. Systems using symbolic AI are less prone to unpredictable behavior.[10]: 14–18
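A small synthetic illustration of this failure mode: a classifier trained on a narrow range of inputs still returns a confident prediction for an input unlike anything it was trained on, giving no indication that the situation is unfamiliar. The data and model below are purely illustrative.

```python
# Small illustration: a classifier trained on a narrow range of inputs still
# returns a confident prediction for an input far outside its training data,
# with no warning that the input is novel. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Training data: two classes of "sensor readings" clustered near 1.0 and 3.0.
X_train = np.concatenate([rng.normal(1.0, 0.2, 50), rng.normal(3.0, 0.2, 50)]).reshape(-1, 1)
y_train = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X_train, y_train)

out_of_distribution = np.array([[50.0]])  # nothing like this appeared in training
print(model.predict(out_of_distribution), model.predict_proba(out_of_distribution))
# Prints class 1 with probability close to 1.0, despite the input being novel.
```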
The use of AI also increases cybersecurity risks relative to platforms that do not use AI,[10]: 17  and information privacy concerns about collected data may pose a hazard to workers.[9]
Psychosocial hazards are those that arise from the way work is designed, organized, and managed, or from its economic and social contexts, rather than from a physical substance or object. They cause not only psychiatric and psychological outcomes such as occupational burnout, anxiety disorders, and depression, but they can also cause physical injury or illness such as cardiovascular disease or musculoskeletal injury.[23] Many hazards of AI are psychosocial in nature due to its potential to cause changes in work organization, in terms of increasing complexity and interaction between different organizational factors. However, psychosocial risks are often overlooked by designers of advanced manufacturing systems.[11]
Einola and Khoreva explore how different organizational groups perceive and interact with AI technologies.[24] Their research shows that successful AI integration depends on human ownership and contextual understanding. They caution against blind technological optimism and stress the importance of tailoring AI use to specific workplace ecosystems. This perspective reinforces the need for inclusive design and transparent implementation strategies. (Einola, Katja; Khoreva, Violetta (2023). "Best Friend or Broken Tool? Exploring the Co-existence of Humans and Artificial Intelligence in the Workplace Ecosystem". Human Resource Management. 62 (1): 117–135. doi:10.1002/hrm.22147.)
AI is expected to lead to changes in the skills required of workers, requiring training of existing workers, flexibility, and openness to change.[1] The requirement for combining conventional expertise with computer skills may be challenging for existing workers.[11] Over-reliance on AI tools may lead to deskilling of some professions.[17]
While AI offers convenience and judgement-free interaction, increased reliance on it, particularly among Generation Z, may reduce interpersonal communication in the workplace and affect social cohesion.[25] As AI becomes a substitute for traditional peer collaboration and mentorship, there is a risk of diminishing opportunities for interpersonal skill development and team-based learning.[26] This shift could contribute to workplace isolation and changes in team dynamics.[27]
Increased monitoring may lead to micromanagement and thus to stress and anxiety.[28] A perception of surveillance may also lead to stress. Controls for these include consultation with worker groups, extensive testing, and attention to introduced bias. Wearable sensors, activity trackers, and augmented reality may also lead to stress from micromanagement, both for assembly line workers and gig workers. Gig workers also lack the legal protections and rights of formal workers.[13]: 2–10
AI is not merely a technical tool but a transformative force that reshapes workplace structures and decision-making processes. Newell and Marabelli argue that AI alters power dynamics and employee autonomy, requiring a more nuanced understanding of its social and organizational implications. Their study calls for thoughtful integration of AI that considers its broader impact on work culture and human roles. (Newell, Sue; Marabelli, Marco (2021). "Artificial Intelligence and the Changing Nature of Work". Academy of Management Discoveries. 7 (4): 521–536. doi:10.5465/amd.2019.0103.)
There is also the risk of people being forced to work at a robot's pace, or to monitor robot performance at nonstandard hours.[13]: 5–7
Algorithms trained on past decisions may mimic undesirable human biases, for example, past discriminatory hiring and firing practices. Information asymmetry between management and workers may lead to stress if workers do not have access to the data or algorithms that are the basis for decision-making.[13]: 3–5
In addition to a model inadvertently learning discriminatory features, intentional discrimination may occur through the design of metrics that covertly discriminate via correlated variables in a non-obvious way.[13]: 12–13
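A synthetic illustration of how this can happen: the protected attribute is never given to the model, but a correlated proxy variable lets a model trained on historically biased decisions reproduce much of the same disparity. All data below are made up for illustration.

```python
# Synthetic illustration: the protected attribute is excluded from the model,
# but a correlated proxy (here, a postal-code indicator) lets a model trained
# on historically biased decisions reproduce the same disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, n)                                # protected attribute (never a feature)
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)   # proxy, 90% correlated with group
skill = rng.normal(0, 1, n)
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5  # biased past decisions

X = np.column_stack([skill, postcode])                       # model sees skill and the proxy only
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate {pred[group == g].mean():.2f}")
# The gap between groups persists even though `group` was never a feature.
```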
In complex human–machine interactions, some approaches to accident analysis may be biased to safeguard a technological system and its developers by assigning blame to the individual human operator instead.[17]

Physical hazards in the form of human–robot collisions may arise from robots using AI, especially collaborative robots (cobots). Cobots are intended to operate in close proximity to humans,[29] which makes it impossible to apply the common hazard control of isolating the robot with fences or other barriers, as is widely done for traditional industrial robots. Automated guided vehicles are a type of cobot that, as of 2019, are in common use, often as forklifts or pallet jacks in warehouses or factories.[10]: 5, 29–30  For cobots, sensor malfunctions or unexpected work environment conditions can lead to unpredictable robot behavior and thus to human–robot collisions.[13]: 5–7
Self-driving cars are another example of AI-enabled robots. In addition, the ergonomics of control interfaces and human–machine interactions may give rise to hazards.[11]
AI, in common with other computational technologies, requires cybersecurity measures to stop software breaches and intrusions,[10]: 17  as well as information privacy measures.[9] Communication and transparency with workers about data usage is a control for psychosocial hazards arising from security and privacy issues.[9] Proposed best practices for employer-sponsored worker monitoring programs include using only validated sensor technologies; ensuring voluntary worker participation; ceasing data collection outside the workplace; disclosing all data uses; and ensuring secure data storage.[17]
For industrial cobots equipped with AI-enabled sensors, the International Organization for Standardization (ISO) recommended: (a) safety-related monitored stopping controls; (b) human hand guiding of the cobot; (c) speed and separation monitoring controls; and (d) power and force limitations. Networked AI-enabled cobots may share safety improvements with each other.[17] Human oversight is another general hazard control for AI.[13]: 12–13
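As a simplified sketch of the speed-and-separation-monitoring idea, the snippet below slows or stops a cobot when the measured human–robot distance falls below a protective distance derived from current speeds, reaction time, and stopping time. This is a simplification in the spirit of ISO/TS 15066 rather than the normative formula, and all parameter values are illustrative.

```python
# Simplified sketch of speed-and-separation monitoring: slow or stop the cobot
# when the measured human-robot distance drops below a protective separation
# distance computed from current speeds, reaction time, and stopping time.
# This is not the normative ISO/TS 15066 formula; all values are illustrative.

def protective_distance(human_speed, robot_speed, reaction_time, stopping_time, clearance=0.2):
    """Distance (m) the human could close before the robot is fully stopped."""
    human_travel = human_speed * (reaction_time + stopping_time)
    robot_travel = robot_speed * reaction_time  # robot keeps moving until it reacts
    return human_travel + robot_travel + clearance


def command(measured_distance, human_speed, robot_speed):
    threshold = protective_distance(human_speed, robot_speed,
                                    reaction_time=0.1, stopping_time=0.3)
    if measured_distance <= threshold:
        return "protective_stop"
    if measured_distance <= 2 * threshold:
        return "reduced_speed"
    return "normal_operation"


print(command(measured_distance=0.4, human_speed=1.6, robot_speed=1.0))  # protective_stop
```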
Both applications and hazards arising from AI can be considered as part of existing frameworks for occupational health and safety risk management. As with all hazards, risk identification is most effective and least costly when done in the design phase.[11]
Workplace health surveillance, the collection and analysis of health data on workers, is challenging for AI because labor data are often reported in aggregate, do not provide breakdowns between different types of work, and are focused on economic data such as wages and employment rates rather than the skill content of jobs. Proxies for skill content include educational requirements and classifications of routine versus non-routine, and cognitive versus physical, jobs. However, these may still not be specific enough to distinguish specific occupations that have distinct impacts from AI. The United States Department of Labor's Occupational Information Network is an example of a database with a detailed taxonomy of skills. Additionally, data are often reported on a national level, while there is much geographical variation, especially between urban and rural areas.[14]
AI systems in the workplace raise ethical concerns related to privacy, fairness, human dignity, and transparency. According to the OECD, these risks must be addressed through robust governance frameworks and accountability mechanisms. Ethical deployment of AI requires clear policies on data usage, explainability of algorithms, and safeguards against discrimination and surveillance. ("Using Artificial Intelligence in the Workplace: What Are the Main Ethical Risks?". OECD. 2022. Retrieved 13 August 2025.)
As of 2019, ISO was developing a standard on the use of metrics and dashboards, information displays presenting company metrics for managers, in workplaces. The standard is planned to include guidelines for both gathering data and displaying it in a viewable and useful manner.[13]: 11 [30][31]
In the European Union, the General Data Protection Regulation, while oriented towards consumer data, is also relevant for workplace data collection. Data subjects, including workers, have "the right not to be subject to a decision based solely on automated processing". Other relevant EU directives include the Machinery Directive (2006/42/EC), the Radio Equipment Directive (2014/53/EU), and the General Product Safety Directive (2001/95/EC).[13]: 10, 12–13
The National Conference of State Legislatures (NCSL) highlights how U.S. federal and state governments are responding to AI's growing role in employment. Legislative efforts focus on regulating employee surveillance, mitigating bias in hiring and performance evaluations, and addressing job displacement. The report also discusses initiatives to upskill workers and promote equitable AI adoption. ("Artificial Intelligence in the Workplace: The Federal and State Legislative Landscape". NCSL. 2024. Retrieved 13 August 2025.)