
Workplace impact of artificial intelligence

From Wikipedia, the free encyclopedia
Impact of artificial intelligence on workers
A close up of a person's neck and upper torso, with a black rectangular sensor and camera unit attached to their shirt collar
AI-enabled wearable sensor networks may improve worker safety and health through access to real-time, personalized data, but may also present psychosocial hazards such as micromanagement, a perception of surveillance, and information security concerns.

The impact of artificial intelligence on workers includes both applications to improve worker safety and health, and potential hazards that must be controlled.

One potential application is using AI to eliminate hazards by removing humans from hazardous situations that involve risk of stress, overwork, or musculoskeletal injuries. Predictive analytics may also be used to identify conditions that may lead to hazards such as fatigue, repetitive strain injuries, or toxic substance exposure, leading to earlier interventions. Another is to streamline workplace safety and health workflows through automating repetitive tasks, enhancing safety training programs through virtual reality, or detecting and reporting near misses.

When used in the workplace, AI also presents the possibility of new hazards. These may arise from machine learning techniques leading to unpredictable behavior and inscrutability in their decision-making, or from cybersecurity and information privacy issues. Many hazards of AI are psychosocial due to its potential to cause changes in work organization. These include changes in the skills required of workers,[1] increased monitoring leading to micromanagement, algorithms unintentionally or intentionally mimicking undesirable human biases, and assigning blame for machine errors to the human operator instead. AI may also lead to physical hazards in the form of human–robot collisions, and ergonomic risks of control interfaces and human–machine interactions. Hazard controls include cybersecurity and information privacy measures, communication and transparency with workers about data usage, and limitations on collaborative robots.

From a workplace safety and health perspective, only "weak" or "narrow" AI that is tailored to a specific task is relevant, as there are many examples currently in use or expected to come into use in the near future. "Strong" or "general" AI is not generally expected to be feasible in the near future, and discussion of its risks falls within the purview of futurists and philosophers rather than industrial hygienists.

Certain digital technologies are predicted to result in job losses. As of the early 2020s, the adoption of modern robotics had led to net employment growth, but many businesses anticipate that automation, or employing robots, will result in job losses in the future. This is especially true for companies in Central and Eastern Europe.[2][3][4] Other digital technologies, such as platforms or big data, are projected to have a more neutral impact on employment.[2][4] A large number of tech workers have been laid off starting in 2023;[5] many such job cuts have been attributed to artificial intelligence.[6][7]

Health and safety applications


In order for any potential AI health and safety application to be adopted, it requires acceptance by both managers and workers.[8] For example, worker acceptance may be diminished by concerns about information privacy,[9] or by a lack of trust and acceptance of the new technology, which may arise from inadequate transparency or training.[10]: 26–28, 43–45  Alternatively, managers may emphasize increases in economic productivity rather than gains in worker safety and health when implementing AI-based systems.[11]

Eliminating hazardous tasks

A large room with a suspended ceiling packed with cubicles containing computer monitors
Call centers involve significant psychosocial hazards due to surveillance and overwork. AI-enabled chatbots can remove workers from the most basic and repetitive of these tasks.

AI may increase the scope of work tasks where a worker can be removed from a situation that carries risk.[12] In a sense, while traditional automation can replace the functions of a worker's body with a robot, AI effectively replaces the functions of their brain with a computer. Hazards that can be avoided include stress, overwork, musculoskeletal injuries, and boredom.[13]: 5–7 

This can expand the range of affected job sectors into white-collar and service sector jobs such as medicine, finance, and information technology.[14] For example, call center workers face extensive health and safety risks due to the work's repetitive and demanding nature and its high rates of micro-surveillance. AI-enabled chatbots lower the need for humans to perform the most basic call center tasks.[13]: 5–7 

Analytics to reduce risk

A drawing of a man lifting a weight onto an apparatus, with various distances marked
The NIOSH lifting equation[15][16] is calibrated for a typical healthy worker to avoid back injuries, but AI-based methods may instead allow real-time, personalized calculation of risk.

Machine learning is used for people analytics to make predictions about worker behavior to assist management decision-making, such as hiring and performance assessment. These could also be used to improve worker health. The analytics may be based on inputs such as online activities, monitoring of communications, location tracking, and voice and body-language analysis of filmed interviews. For example, sentiment analysis may be used to spot fatigue and prevent overwork.[13]: 3–7  Decision support systems have a similar ability to be used to, for example, prevent industrial disasters or make disaster response more efficient.[17]

For manual material handling workers, predictive analytics and artificial intelligence may be used to reduce musculoskeletal injury. Traditional guidelines are based on statistical averages and are geared towards anthropometrically typical humans. The analysis of large amounts of data from wearable sensors may allow real-time, personalized calculation of ergonomic risk and fatigue management, as well as better analysis of the risk associated with specific job roles.[9]
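The traditional, non-personalized baseline that such systems would replace can be sketched from the published metric form of the revised NIOSH lifting equation: a Recommended Weight Limit (RWL) is the product of a load constant and several task multipliers, and the Lifting Index is the actual load divided by the RWL. The frequency and coupling multipliers come from the published lookup tables and are passed in here as plain parameters; this is an illustrative sketch, not a compliance tool:

```python
def niosh_rwl(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
    """Recommended Weight Limit (kg) per the revised NIOSH lifting
    equation, metric form. h_cm: horizontal hand distance; v_cm:
    vertical hand height; d_cm: vertical travel distance; a_deg:
    asymmetry angle. fm and cm are the frequency and coupling
    multipliers, read from the published lookup tables."""
    LC = 23.0                               # load constant, kg
    HM = min(25.0 / h_cm, 1.0)              # horizontal multiplier
    VM = 1.0 - 0.003 * abs(v_cm - 75.0)     # vertical multiplier
    DM = 0.82 + 4.5 / d_cm                  # distance multiplier
    AM = 1.0 - 0.0032 * a_deg               # asymmetry multiplier
    return LC * HM * VM * DM * AM * fm * cm

def lifting_index(load_kg, rwl):
    """LI > 1.0 indicates elevated risk of lifting-related injury."""
    return load_kg / rwl
```

A sensor-driven system would, in effect, recompute these inputs continuously per worker rather than once per job design.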

Wearable sensors may also enable earlier intervention against exposure to toxic substances than is possible with area or breathing zone testing on a periodic basis. Furthermore, the large data sets generated could improve workplace health surveillance, risk assessment, and research.[17]

Streamlining safety and health workflows


AI can also be used to make the workplace safety and health workflow more efficient.[18] Digital assistants, like Amazon Alexa, Google Assistant, and Apple Siri, are increasingly adopted in workplaces to enhance productivity by automating routine tasks. These AI-based tools can manage administrative duties, such as scheduling meetings, sending reminders, processing orders, and organizing travel plans. This automation can improve workflow efficiency by reducing time spent on repetitive tasks, thus supporting employees to focus on higher-priority responsibilities.[19] Digital assistants are especially valuable in streamlining customer service workflows, where they can handle basic inquiries, reducing the demand on human employees.[19] However, there remain challenges in fully integrating these assistants due to concerns over data privacy, accuracy, and organizational readiness.[19]

One example is coding of workers' compensation claims, which are submitted in prose narrative form and must be manually assigned standardized codes. AI is being investigated to perform this task faster, more cheaply, and with fewer errors.[20][21]
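As an illustration of the kind of narrative-to-code classification involved, the sketch below trains a minimal naive Bayes text classifier on a few invented claim narratives. The narratives, event codes, and tiny vocabulary are hypothetical examples only; real claims-coding systems use far larger datasets and more capable models:

```python
import math
from collections import Counter, defaultdict

# Hypothetical training data: claim narratives with invented event codes.
TRAIN = [
    ("worker slipped on wet floor and fell", "FALL"),
    ("fell from ladder while changing fixture", "FALL"),
    ("strained back lifting heavy box", "OVEREXERTION"),
    ("shoulder injury from lifting pallet", "OVEREXERTION"),
]

def train_nb(rows):
    """Count words per label and label frequencies."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in rows:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest log-probability, with
    add-one (Laplace) smoothing over the training vocabulary."""
    vocab = {w for c in word_counts.values() for w in c}
    best = None
    for label, n in label_counts.items():
        total = sum(word_counts[label].values())
        score = math.log(n / sum(label_counts.values()))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if best is None or score > best[1]:
            best = (label, score)
    return best[0]
```

In use, an unseen narrative such as "slipped and fell on stairs" is scored against each code and assigned the most probable one.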

AI-enabled virtual reality systems may be useful for safety training for hazard recognition.[17]

Artificial intelligence may be used to detect near misses more efficiently. Reporting and analysis of near misses are important in reducing accident rates, but such events are often underreported because they are not noticed by humans, or are not reported by workers due to social factors.[22]
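One way such a system might surface unreported near misses is by mining sensor logs for close-call events that a human would never file a report about. The sketch below is a deliberately simple illustration: the log format and the 1-meter proximity threshold are assumptions for the example, not any standard:

```python
# Assumed threshold for this illustration, not drawn from any standard.
NEAR_MISS_DISTANCE_M = 1.0

def detect_near_misses(log):
    """log: iterable of (timestamp_s, worker_id, distance_m) readings
    from a hypothetical worker-vehicle proximity sensor. Returns the
    (timestamp, worker) pairs where the separation dipped below the
    threshold, so they can be reviewed and recorded."""
    return [(t, w) for t, w, d in log if d < NEAR_MISS_DISTANCE_M]
```

A production system would additionally merge consecutive readings into single events and filter out routine, intentional approaches.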

Hazards

A drawing showing a black rectangular solid labeled "black box", with an arrow entering labeled "input/stimulus", and an arrow exiting labeled "output/response"
Some machine learning training methods are prone to unpredictability and inscrutability in their decision-making, which can lead to hazards if managers or workers cannot predict or understand an AI-based system's behavior.

There are several broad aspects of AI that may give rise to specific hazards. The risks depend on implementation rather than the mere presence of AI.[13]: 2–3 

Systems using sub-symbolic AI such as machine learning may behave unpredictably and are more prone to inscrutability in their decision-making. This is especially true if a situation is encountered that was not part of the AI's training dataset, and is exacerbated in environments that are less structured. Undesired behavior may also arise from flaws in the system's perception (arising either from within the software or from sensor degradation), knowledge representation and reasoning, or from software bugs.[10]: 14–18  They may arise from improper training, such as a user applying the same algorithm to two problems that do not have the same requirements.[13]: 12–13  Machine learning applied during the design phase may have different implications than that applied at runtime. Systems using symbolic AI are less prone to unpredictable behavior.[10]: 14–18 

The use of AI also increases cybersecurity risks relative to platforms that do not use AI,[10]: 17  and information privacy concerns about collected data may pose a hazard to workers.[9]

Psychosocial

Introduction of new AI-enabled technologies may lead to changes in work practices that carry psychosocial hazards, such as a need for retraining or fear of technological unemployment.

Psychosocial hazards are those that arise from the way work is designed, organized, and managed, or from its economic and social contexts, rather than from a physical substance or object. They cause not only psychiatric and psychological outcomes such as occupational burnout, anxiety disorders, and depression, but can also cause physical injury or illness such as cardiovascular disease or musculoskeletal injury.[23] Many hazards of AI are psychosocial in nature due to its potential to cause changes in work organization, in terms of increasing complexity and interaction between different organizational factors. However, psychosocial risks are often overlooked by designers of advanced manufacturing systems.[11]

Einola and Khoreva explore how different organizational groups perceive and interact with AI technologies.[24] Their research shows that successful AI integration depends on human ownership and contextual understanding. They caution against blind technological optimism and stress the importance of tailoring AI use to specific workplace ecosystems. This perspective reinforces the need for inclusive design and transparent implementation strategies.

Changes in work practices


AI is expected to lead to changes in the skills required of workers, requiring training of existing workers, flexibility, and openness to change.[1] The requirement for combining conventional expertise with computer skills may be challenging for existing workers.[11] Over-reliance on AI tools may lead to deskilling of some professions.[17]

While AI offers convenience and judgement-free interaction, increased reliance—particularly among Generation Z—may reduce interpersonal communication in the workplace and affect social cohesion.[25] As AI becomes a substitute for traditional peer collaboration and mentorship, there is a risk of diminishing opportunities for interpersonal skill development and team-based learning.[26] This shift could contribute to workplace isolation and changes in team dynamics.[27]

Increased monitoring may lead to micromanagement and thus to stress and anxiety.[28] A perception of surveillance may also lead to stress. Controls for these include consultation with worker groups, extensive testing, and attention to introduced bias. Wearable sensors, activity trackers, and augmented reality may also lead to stress from micromanagement, both for assembly line workers and gig workers. Gig workers also lack the legal protections and rights of formal workers.[13]: 2–10 

AI is not merely a technical tool but a transformative force that reshapes workplace structures and decision-making processes. Newell and Marabelli argue that AI alters power dynamics and employee autonomy, requiring a more nuanced understanding of its social and organizational implications. Their study calls for thoughtful integration of AI that considers its broader impact on work culture and human roles (Newell, Sue; Marabelli, Marco (2021). "Artificial Intelligence and the Changing Nature of Work". Academy of Management Discoveries. 7 (4): 521–536. doi:10.5465/amd.2019.0103).

There is also the risk of people being forced to work at a robot's pace, or to monitor robot performance at nonstandard hours.[13]: 5–7 

Bias

Main article: Algorithmic bias

Algorithms trained on past decisions may mimic undesirable human biases, for example, past discriminatory hiring and firing practices. Information asymmetry between management and workers may lead to stress, if workers do not have access to the data or algorithms that are the basis for decision-making.[13]: 3–5 

In addition to models built with inadvertently discriminatory features, discrimination may be intentionally introduced by designing metrics that covertly discriminate through correlated variables in a non-obvious way.[13]: 12–13 
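One common check for this kind of outcome is the "four-fifths rule" from US employment-selection guidance: compare selection rates across groups, and treat a ratio of the lowest to the highest rate below 0.8 as evidence of adverse impact. A minimal sketch with invented group labels, applicable whether the bias entered through training data or a correlated proxy variable:

```python
def selection_rates(candidates):
    """candidates: list of (group, selected_bool) records from a
    hiring or evaluation process. Returns selection rate per group."""
    totals, picked = {}, {}
    for group, selected in candidates:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + int(selected)
    return {g: picked[g] / totals[g] for g in totals}

def impact_ratio(rates):
    """Four-fifths rule: ratio of the lowest to the highest selection
    rate; values below 0.8 are commonly treated as adverse impact."""
    return min(rates.values()) / max(rates.values())
```

Such a check only detects disparate outcomes; identifying which covertly correlated variable produced them requires further analysis.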

In complex human–machine interactions, some approaches to accident analysis may be biased to safeguard a technological system and its developers by assigning blame to the individual human operator instead.[17]

Physical

A yellow rectangular wheeled forklift robot in a warehouse, with stacks of boxes visible and additional similar robots visible behind it
Automated guided vehicles are examples of cobots currently in common use. Use of AI to operate these robots may affect the risk of physical hazards such as the robot or its moving parts colliding with workers.

Physical hazards in the form of human–robot collisions may arise from robots using AI, especially collaborative robots (cobots). Cobots are intended to operate in close proximity to humans,[29] which makes it impossible to apply the common hazard control of isolating the robot with fences or other barriers, as is widely done for traditional industrial robots. Automated guided vehicles are a type of cobot that as of 2019 were in common use, often as forklifts or pallet jacks in warehouses or factories.[10]: 5, 29–30  For cobots, sensor malfunctions or unexpected work environment conditions can lead to unpredictable robot behavior and thus to human–robot collisions.[13]: 5–7 

Self-driving cars are another example of AI-enabled robots. In addition, the ergonomics of control interfaces and human–machine interactions may give rise to hazards.[11]

Hazard controls


AI, in common with other computational technologies, requires cybersecurity measures to stop software breaches and intrusions,[10]: 17  as well as information privacy measures.[9] Communication and transparency with workers about data usage is a control for psychosocial hazards arising from security and privacy issues.[9] Proposed best practices for employer-sponsored worker monitoring programs include using only validated sensor technologies; ensuring voluntary worker participation; ceasing data collection outside the workplace; disclosing all data uses; and ensuring secure data storage.[17]

For industrial cobots equipped with AI-enabled sensors, the International Organization for Standardization (ISO) recommended: (a) safety-related monitored stopping controls; (b) human hand guiding of the cobot; (c) speed and separation monitoring controls; and (d) power and force limitations. Networked AI-enabled cobots may share safety improvements with each other.[17] Human oversight is another general hazard control for AI.[13]: 12–13 
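The "speed and separation monitoring" control can be illustrated with a simplified version of the protective separation distance idea from ISO/TS 15066: keep the robot far enough away that the combined travel of worker and robot during the robot's reaction and stopping time cannot close the gap. The formula below is a simplified sketch with assumed parameter values, not the standard's full expression (which adds measurement-uncertainty terms):

```python
def min_separation_m(v_human, v_robot, t_react, t_stop, stop_dist, margin=0.2):
    """Simplified protective separation distance (meters). The human
    advances during the robot's reaction and stopping time; the robot
    advances during its reaction time and then travels stop_dist while
    braking. margin is an assumed fixed intrusion allowance."""
    return v_human * (t_react + t_stop) + v_robot * t_react + stop_dist + margin

def must_slow(current_distance, v_human, v_robot, t_react, t_stop, stop_dist):
    """True if the cobot should reduce speed or stop."""
    return current_distance < min_separation_m(v_human, v_robot,
                                               t_react, t_stop, stop_dist)
```

With an assumed walking speed of 1.6 m/s and sub-second reaction and stopping times, the required separation works out to roughly a meter; an AI-enabled sensor system would evaluate this check continuously.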

Risk management


Both applications and hazards arising from AI can be considered as part of existing frameworks for occupational health and safety risk management. As with all hazards, risk identification is most effective and least costly when done in the design phase.[11]

Workplace health surveillance, the collection and analysis of health data on workers, is challenging for AI because labor data are often reported in aggregate, do not provide breakdowns between different types of work, and focus on economic measures such as wages and employment rates rather than the skill content of jobs. Proxies for skill content include educational requirements and classifications of routine versus non-routine, and cognitive versus physical, jobs. However, these may still not be specific enough to distinguish specific occupations that have distinct impacts from AI. The United States Department of Labor's Occupational Information Network is an example of a database with a detailed taxonomy of skills. Additionally, data are often reported at the national level, while there is much geographical variation, especially between urban and rural areas.[14]

AI systems in the workplace raise ethical concerns related to privacy, fairness, human dignity, and transparency. According to the OECD, these risks must be addressed through robust governance frameworks and accountability mechanisms. Ethical deployment of AI requires clear policies on data usage, explainability of algorithms, and safeguards against discrimination and surveillance ("Using Artificial Intelligence in the Workplace: What Are the Main Ethical Risks?". OECD. 2022. Retrieved 13 August 2025).

Standards and regulation

Main article: Regulation of artificial intelligence

As of 2019, ISO was developing a standard on the use of metrics and dashboards, information displays presenting company metrics for managers, in workplaces. The standard is planned to include guidelines for both gathering data and displaying it in a viewable and useful manner.[13]: 11 [30][31]

In the European Union, the General Data Protection Regulation, while oriented towards consumer data, is also relevant for workplace data collection. Data subjects, including workers, have "the right not to be subject to a decision based solely on automated processing". Other relevant EU directives include the Machinery Directive (2006/42/EC), the Radio Equipment Directive (2014/53/EU), and the General Product Safety Directive (2001/95/EC).[13]: 10, 12–13 

The National Conference of State Legislatures (NCSL) highlights how U.S. federal and state governments are responding to AI's growing role in employment. Legislative efforts focus on regulating employee surveillance, mitigating bias in hiring and performance evaluations, and addressing job displacement. The report also discusses initiatives to upskill workers and promote equitable AI adoption ("Artificial Intelligence in the Workplace: The Federal and State Legislative Landscape". NCSL. 2024. Retrieved 13 August 2025).


References

  1. "Impact of AI on Jobs: Jobocalypse on the Horizon?". 14 July 2023.
  2. EIB (2022-05-05). Digitalisation in Europe 2021–2022: Evidence from the EIB Investment Survey. European Investment Bank. ISBN 978-92-861-5233-7.
  3. Parschau, Christian; Hauge, Jostein (2020-10-01). "Is automation stealing manufacturing jobs? Evidence from South Africa's apparel industry". Geoforum. 115: 120–131. doi:10.1016/j.geoforum.2020.07.002. ISSN 0016-7185. S2CID 224877507.
  4. Genz, Sabrina (2022-05-05). "The nuanced relationship between cutting-edge technologies and jobs: Evidence from Germany". Brookings. Retrieved 2022-06-05.
  5. Allyn, Bobby (2024-01-28). "Nearly 25,000 tech workers were laid off in the first weeks of 2024. Why is that?". NPR. Retrieved 27 November 2024.
  6. Ahirwar, Shalley (2025-09-03). "AI in Jobs & Workplace – The Workplace Automation". INTELLIGENCEINSIGHTS.NET. Retrieved 17 September 2025.
  7. Cerullo, Megan (2024-01-25). "Tech companies are slashing thousands of jobs as they pivot toward AI". CBS. Retrieved 27 November 2024.
  8. Fiegler-Rudol, Jakub; Lau, Karolina; Mroczek, Alina; Kasperczyk, Janusz (2025-01-30). "Exploring Human–AI Dynamics in Enhancing Workplace Health and Safety: A Narrative Review". Int. J. Environ. Res. Public Health. 22 (2): 199. doi:10.3390/ijerph22020199. PMC 11855051. PMID 40003424.
  9. Gianatti, Toni-Louise (2020-05-14). "How AI-Driven Algorithms Improve an Individual's Ergonomic Safety". Occupational Health & Safety. Retrieved 2020-07-30.
  10. Jansen, Anne; van der Beek, Dolf; Cremers, Anita; Neerincx, Mark; van Middelaar, Johan (2018-08-28). "Emergent risks to workplace safety: working in the same space as a cobot". Netherlands Organisation for Applied Scientific Research (TNO). Retrieved 2020-08-12.
  11. Badri, Adel; Boudreau-Trudel, Bryan; Souissi, Ahmed Saâdeddine (2018-11-01). "Occupational health and safety in the industry 4.0 era: A cause for major concern?". Safety Science. 109: 403–411. doi:10.1016/j.ssci.2018.06.012. hdl:10654/44028. S2CID 115901369.
  12. Lee, P. N.; Lee, M.; Haq, A. M.; Longton, E. B.; Wright, V. (1974-03-01). "Periarthritis of the shoulder. Trial of treatments investigated by multivariate analysis". Ann Rheum Dis. 33 (2): 116–119. doi:10.1136/ard.33.2.116. PMC 1006221. PMID 4595273.
  13. Moore, Phoebe V. (2019-05-07). "OSH and the Future of Work: benefits and risks of artificial intelligence tools in workplaces". EU-OSHA. Retrieved 2020-07-30.
  14. Frank, Morgan R.; Autor, David; Bessen, James E.; Brynjolfsson, Erik; Cebrian, Manuel; Deming, David J.; Feldman, Maryann; Groh, Matthew; Lobo, José; Moro, Esteban; Wang, Dashun (2019-04-02). "Toward understanding the impact of artificial intelligence on labor". Proceedings of the National Academy of Sciences. 116 (14): 6531–6539. Bibcode:2019PNAS..116.6531F. doi:10.1073/pnas.1900949116. ISSN 0027-8424. PMC 6452673. PMID 30910965.
  15. Warner, Emily; Hudock, Stephen D.; Lu, Jack (2017-08-25). "NLE Calc: A Mobile Application Based on the Revised NIOSH Lifting Equation". NIOSH Science Blog. Retrieved 2020-08-17.
  16. "Applications manual for the revised NIOSH lifting equation". U.S. National Institute for Occupational Safety and Health. 1994-01-01. doi:10.26616/NIOSHPUB94110.
  17. Howard, John (2019-11-01). "Artificial intelligence: Implications for the future of work". American Journal of Industrial Medicine. 62 (11): 917–926. doi:10.1002/ajim.23037. ISSN 0271-3586. PMID 31436850. S2CID 201275028.
  18. Barnes, Mia (2025-06-05). "The Role of Predictive Analytics in Optimizing Health and Safety Workflows". OHS. Retrieved 2025-09-07.
  19. Jackson, Stephen; Panteli, Niki (2024-10-10). "AI-Based Digital Assistants in the Workplace: An Idiomatic Analysis". Communications of the Association for Information Systems. 55 (1): 627–653. doi:10.17705/1CAIS.05524. ISSN 1529-3181.
  20. Meyers, Alysha R. (2019-05-01). "AI and Workers' Comp". NIOSH Science Blog. Retrieved 2020-08-03.
  21. Webb, Sydney; Siordia, Carlos; Bertke, Stephen; Bartlett, Diana; Reitz, Dan (2020-02-26). "Artificial Intelligence Crowdsourcing Competition for Injury Surveillance". NIOSH Science Blog. Retrieved 2020-08-03.
  22. Ferguson, Murray (2016-04-19). "Artificial Intelligence: What's To Come for EHS… And When?". EHS Today. Retrieved 2020-07-30.
  23. Brun, Emmanuelle; Milczarek, Malgorzata (2007). "Expert forecast on emerging psychosocial risks related to occupational safety and health". European Agency for Safety and Health at Work. Retrieved September 3, 2015.
  24. Einola, Katja; Khoreva, Violetta (2022-10-05). "Best friend or broken tool? Exploring the co-existence of humans and artificial intelligence in the workplace ecosystem". Human Resource Management. 66 (1): 117–135. doi:10.1002/hrm.22147.
  25. Keykot, Amritha (June 26, 2025). "Why Gen Z Is Spending More Time With AI Than Real People". Entrepreneur India. Retrieved June 27, 2025.
  26. Yusuf, Samuel; Abubakar, Justina; Durodola, Remilekun (2024). "Impact of AI on continuous learning and skill development in the workplace: A comparative study with traditional methods". World Journal of Advanced Research and Reviews. 23 (2): 1129–1140. doi:10.30574/wjarr.2024.23.2.2439. Retrieved June 26, 2025.
  27. Bakhtiari, Kian (Jul 28, 2023). "Gen-Z, The Loneliness Epidemic And The Unifying Power Of Brands". Forbes. Retrieved June 26, 2025.
  28. "'Why do I feel like somebody's watching me?' Workplace Surveillance Can Impact More Than Just Productivity". GAO. 2024-10-29. Retrieved 2025-09-07.
  29. Hubbard, Scott (2025-08-21). "Collaborative Robots 101: What You Need to Know About Cobots". CRX Collaborative Robot. Retrieved 2025-10-16.
  30. Moore, Phoebe V. (2014-04-01). "Questioning occupational safety and health in the age of AI". Kommission Arbeitsschutz und Normung. Retrieved 2020-08-06.
  31. "Standards by ISO/IEC JTC 1/SC 42 – Artificial intelligence". International Organization for Standardization. Retrieved 2020-08-06.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Workplace_impact_of_artificial_intelligence&oldid=1317181822"