Company type | Private
---|---
Predecessor | IBM Watson Health
Founded | June 30, 2022
Headquarters | Ann Arbor, Michigan
Key people |
Owner | Francisco Partners
Number of employees | 1,000–5,000
Website | merative
Merative L.P., formerly IBM Watson Health, is an American medical technology company that provides products and services to help clients facilitate medical research, clinical research, real-world evidence, and healthcare services through the use of artificial intelligence, data analytics, cloud computing, and other advanced information technology. Merative is owned by Francisco Partners, an American private equity firm headquartered in San Francisco, California. In 2022, IBM divested its Watson Health division and spun it off as Merative. As of 2023, it remains a standalone company headquartered in Ann Arbor, with innovation centers in Ireland, Hyderabad, Bengaluru, and Chennai.[1]
Thomson Healthcare was a division of Thomson Corporation until 2008, when, following Thomson's merger with Reuters, it became the healthcare unit of Thomson Reuters. On April 23, 2012, Thomson Reuters agreed to sell it to Veritas Capital for US$1.25 billion.[2] On June 6, 2012, the sale was finalized and the new company, Truven Health Analytics, became an independent organization solely focused on healthcare.[3]
IBM Corporation acquired Truven Health Analytics on February 18, 2016,[4] and merged it with IBM's Watson Health unit.[5] Truven Health Analytics provided comprehensive healthcare data and analytics services.[6] The company name Truven is a portmanteau of the words "trusted" and "proven".[7]
In January 2022, IBM announced the sale of part of the Watson Health assets, including Truven, to Francisco Partners for a reported $1 billion.[8] On June 30, 2022, Francisco Partners announced the completion of the Watson Health acquisition and launched a healthcare data company named Merative.[9][10]
Watson's natural language, hypothesis generation, and evidence-based learning capabilities are being investigated to see how Watson may contribute to clinical decision support systems and the increase in artificial intelligence in healthcare for use by medical professionals.[11] To aid physicians in the treatment of their patients, once a physician has posed a query to the system describing symptoms and other related factors, Watson first parses the input to identify the most important pieces of information; then mines patient data to find facts relevant to the patient's medical and hereditary history; then examines available data sources to form and test hypotheses;[11] and finally provides a list of individualized, confidence-scored recommendations.[12] The sources of data that Watson uses for analysis can include treatment guidelines, electronic medical record data, notes from healthcare providers, research materials, clinical studies, journal articles, and patient information.[11] Despite being developed and marketed as a "diagnosis and treatment advisor", Watson has never actually been involved in the medical diagnosis process, only in assisting with identifying treatment options for patients who have already been diagnosed.[13]
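The overall flow described above — parse a clinical query, gather patient-specific evidence, and return a ranked list of confidence-scored options — can be illustrated with a minimal sketch. The data structures, scoring rule, and example inputs below are hypothetical and are not IBM's implementation:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    treatment: str
    confidence: float  # 0.0-1.0; higher means more supporting evidence matched

def recommend(query_terms: set[str], patient_history: set[str],
              evidence_base: dict[str, set[str]]) -> list[Recommendation]:
    """Toy confidence scoring: each candidate treatment is scored by how much
    of its supporting evidence overlaps the query and the patient's history."""
    observed = query_terms | patient_history
    results = []
    for treatment, supporting_findings in evidence_base.items():
        overlap = len(observed & supporting_findings)
        confidence = overlap / len(supporting_findings) if supporting_findings else 0.0
        results.append(Recommendation(treatment, round(confidence, 2)))
    # Highest-confidence options first, mirroring a ranked recommendation list.
    return sorted(results, key=lambda r: r.confidence, reverse=True)

# Hypothetical example: a lung-cancer query against a tiny evidence base.
evidence = {
    "chemotherapy regimen A": {"stage III", "non-smoker", "EGFR negative"},
    "targeted therapy B": {"stage III", "EGFR positive"},
}
for rec in recommend({"stage III"}, {"EGFR positive"}, evidence):
    print(rec.treatment, rec.confidence)
```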
In February 2011, it was announced that IBM would be partnering with Nuance Communications on a research project to develop a commercial product over the next 18 to 24 months, designed to exploit Watson's clinical decision support capabilities. Physicians at Columbia University would help to identify critical issues in the practice of medicine where the system's technology may be able to contribute, and physicians at the University of Maryland would work to identify the best way that a technology like Watson could interact with medical practitioners to provide the maximum assistance.[14]
In September 2011, IBM and WellPoint (now Anthem) announced a partnership to utilize Watson's data-crunching capability to help suggest treatment options to physicians.[15] Then, in February 2013, IBM and WellPoint gave Watson its first commercial application, for utilization management decisions in lung cancer treatment at Memorial Sloan–Kettering Cancer Center.[16]
IBM announced a partnership with Cleveland Clinic in October 2012. The company sent Watson to the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, where it would increase its health expertise and assist medical professionals in treating patients. The medical facility would utilize Watson's ability to store and process large quantities of information to help speed up and increase the accuracy of the treatment process. "Cleveland Clinic's collaboration with IBM is exciting because it offers us the opportunity to teach Watson to 'think' in ways that have the potential to make it a powerful tool in medicine", said C. Martin Harris, MD, chief information officer of Cleveland Clinic.[17]
In 2013, IBM and MD Anderson Cancer Center began a pilot program to further the center's "mission to eradicate cancer".[18][19] However, after spending $62 million, the project did not meet its goals and was stopped.[20]
On February 8, 2013, IBM announced that oncologists at the Maine Center for Cancer Medicine and Westmed Medical Group in New York had started to test the Watson supercomputer system in an effort to recommend treatment for lung cancer.[21]
On July 29, 2016, IBM and Manipal Hospitals[22] (a leading hospital chain in India) announced the launch of IBM Watson for Oncology for cancer patients. This product provides information and insights to physicians and cancer patients to help them identify personalized, evidence-based cancer care options. Manipal Hospitals was the second hospital[23] in the world to adopt this technology and the first to offer it to patients online as an expert second opinion through its website.[24] Manipal discontinued the contract in December 2018.
On January 7, 2017, IBM and Fukoku Mutual Life Insurance entered into a contract for IBM to deliver analysis of compensation payouts via its IBM Watson Explorer AI. This resulted in the loss of 34 jobs, and the company said it would speed up compensation payout analysis by analysing claims and medical records, increasing productivity by 30%. The company also said it would save ¥140m in running costs.[25]
IBM Watson has been said to carry the knowledge base of 1,000 cancer specialists, a capability presented as a revolution in the field of healthcare and regarded as a disruptive innovation. However, its application to oncology is still in its nascent stage.[26]
Several startups in the healthcare space have been using seven business model archetypes to take IBM Watson–based offerings to the marketplace. These archetypes depend on the value generated for the target user (e.g. patient focus vs. healthcare provider and payer focus) and the value-capturing mechanism (e.g. providing information or connecting stakeholders).[27]
In 2019, Eliza Strickland called "the Watson Health story [...] a cautionary tale of hubris and hype" and provided a "representative sample of projects" with their status.[28] A 2021 post from the Association for Computing Machinery (ACM) titled "What Happened To Watson Health?" described the portfolio management challenges of Watson Health, given the number of acquisitions involved in the division's creation in 2015, and its near-total emphasis on the "Blue Washing" process over the needs of the acquired customer bases.[29]
On January 21, 2022, IBM announced that it would sell Watson Health to the private equity firm Francisco Partners.[30]
One motive behind large health companies merging with other health companies is greater accessibility of health data.[31] Greater availability of health data may allow for wider implementation of AI algorithms.[32]
A large part of the industry's focus on implementing AI in the healthcare sector is on clinical decision support systems.[33] As the amount of data increases, AI decision support systems become more efficient. Numerous companies are exploring the possibilities of incorporating big data in the health care industry.[34]
IBM's Watson Oncology is in development at Memorial Sloan Kettering Cancer Center and Cleveland Clinic.[35] IBM is also working with CVS Health on AI applications in chronic disease treatment and with Johnson & Johnson on analysis of scientific papers to find new connections for drug development.[36] In May 2017, IBM and Rensselaer Polytechnic Institute began a joint project entitled Health Empowerment by Analytics, Learning and Semantics (HEALS), to explore using AI technology to enhance healthcare.[37]
Some other large companies that have contributed to AI algorithms for use in healthcare include:
Microsoft's Hanover project, in partnership with Oregon Health & Science University's Knight Cancer Institute, analyzes medical research to predict the most effective cancer drug treatment options for patients.[38] Other projects include medical image analysis of tumor progression and the development of programmable cells.[39]
Google's DeepMind platform is being used by the UK National Health Service (NHS) to detect certain health risks through data collected via a mobile app.[40] A second project with the NHS involves the analysis of medical images collected from NHS patients to develop computer vision algorithms to detect cancerous tissues.[41]
Intel's venture capital arm, Intel Capital, has invested in the startup Lumiata, which uses AI to identify at-risk patients and develop care options.[42]
Artificial intelligence in healthcare is the use of complex algorithms and software to emulate human cognition in the analysis of complicated medical data. Specifically, AI is the ability of computer algorithms to approximate conclusions without direct human input.
What distinguishes AI technology from traditional technologies in health care is the ability to gather information, process it, and give a well-defined output to the end-user. AI does this through machine learning algorithms, which can recognize patterns in behavior and create their own logic. To reduce the margin of error, AI algorithms need to be tested repeatedly. AI algorithms behave differently from humans in two ways: (1) algorithms are literal: once a goal is set, an algorithm cannot adjust itself and only understands what it has been told explicitly; and (2) algorithms are black boxes: they can make extremely precise predictions, but cannot explain the cause or the reasoning behind them.[43]
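As a small illustration of the "tested repeatedly" and "black box" points above, the following sketch estimates a classifier's error margin with repeated cross-validation and then produces predictions without any accompanying explanation. The synthetic dataset and model choice are illustrative only, and scikit-learn is assumed to be available:

```python
# A minimal sketch of repeated testing of a "black box" classifier.
# The synthetic data stands in for real clinical data; scikit-learn is assumed installed.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedKFold, cross_val_score

# Synthetic "patient" records: 20 numeric features, binary outcome.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = RandomForestClassifier(random_state=0)

# Repeated k-fold testing narrows the estimate of the model's error margin.
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# The fitted model returns predictions but no human-readable rationale.
model.fit(X, y)
print(model.predict(X[:3]))
```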
The primary aim of health-related AI applications is to analyze relationships between prevention or treatment techniques and patient outcomes.[44] AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. Medical institutions such as the Mayo Clinic, Memorial Sloan Kettering Cancer Center,[45][46] and the National Health Service[47] have developed AI algorithms for their departments. Large technology companies such as IBM[48] and Google,[47] and startups such as Welltok and Ayasdi,[49] have also developed AI algorithms for healthcare. Additionally, hospitals are looking to AI tools to support operational initiatives that increase cost savings, improve patient satisfaction, and satisfy their staffing and workforce needs.[50] Companies are developing predictive analytics tools that help healthcare managers improve business operations through increasing utilization, decreasing patient boarding, reducing length of stay, and optimizing staffing levels.[51]
The following medical fields are of interest in artificial intelligence research:
The ability to interpret imaging results in radiology may aid clinicians in detecting minute changes in an image that a clinician might accidentally miss. A study at Stanford created an algorithm that could detect pneumonia, at that specific site and in the patients involved, with a better average F1 score (a statistical metric based on precision and recall) than the radiologists involved in that trial.[52] The Radiological Society of North America has included presentations on AI in imaging at its annual meeting. The emergence of AI technology in radiology is perceived as a threat by some specialists, as the technology can outperform specialists on certain statistical metrics in isolated cases.[53][54]
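For reference, the F1 score mentioned above is the harmonic mean of precision and recall. A minimal computation from confusion-matrix counts might look like the sketch below; the counts shown are made up for illustration:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall, computed here from
    true positives (tp), false positives (fp), and false negatives (fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical pneumonia-detection counts on a test set.
print(round(f1_score(tp=80, fp=20, fn=10), 3))  # 0.842
```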
Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.[55][56]
In 2018, a paper published in the journal Annals of Oncology reported that skin cancer could be detected more accurately by an artificial intelligence system (which used a deep learning convolutional neural network) than by dermatologists. On average, the human dermatologists accurately detected 86.6% of skin cancers from the images, compared to 95% for the CNN.[57]
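The cited study's actual architecture and training data are not reproduced here, but a minimal binary image classifier of the general kind described (malignant vs. benign) can be sketched as follows. Keras is assumed to be available, and the input size and layer choices are illustrative assumptions:

```python
# A minimal sketch of a binary skin-lesion classifier; not the architecture
# from the cited study. Assumes TensorFlow/Keras is installed.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),      # 128x128 RGB lesion images (assumed size)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # predicted probability of malignancy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# Training would then use labeled dermoscopic images, e.g.:
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
```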
AI has been used to diagnose many diseases efficiently and accurately. Two of the most notable are diabetes and cardiovascular disease (CVD), both among the top ten causes of death worldwide, and both the basis for much of the research and testing aimed at producing accurate diagnoses. Because of the high mortality rate associated with these diseases, there have been efforts to integrate various methods to help obtain an accurate diagnosis.
A review by Jiang et al. (2017)[58] demonstrated that multiple types of AI techniques have been used for a variety of diseases, including support vector machines, neural networks, and decision trees. Each of these techniques is described as having a "training goal" so that "classifications agree with the outcomes as much as possible…".[58]
To give some specifics of disease diagnosis and classification, two techniques used in classifying these diseases are artificial neural networks (ANN) and Bayesian networks (BN).[59] A review of papers published between 2008 and 2017[59] examined which of the two techniques performed better. It concluded that "the early classification of these diseases can be achieved by developing machine learning models such as Artificial Neural Network and Bayesian Network", and Alic et al. (2017)[59] further concluded that ANN performed better, classifying diabetes and CVD with a higher mean accuracy in "both cases (87.29 for diabetes and 89.38 for CVD)".
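A minimal sketch of this kind of model comparison is shown below, using a small neural network and a naive Bayes classifier (a much-simplified stand-in for a full Bayesian network) on a synthetic tabular dataset. The data, models, and resulting accuracies are purely illustrative, and scikit-learn is assumed to be available:

```python
# Illustrative comparison of a small neural network and a naive Bayes classifier
# (a simplified stand-in for a Bayesian network) on synthetic tabular data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a diabetes/CVD risk dataset: 10 numeric features, binary label.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

models = {
    "ANN (multilayer perceptron)": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=42),
    ),
    "Naive Bayes": GaussianNB(),
}

# Mean cross-validated accuracy, analogous to the mean accuracies reported above.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```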
The increase of telemedicine has shown the rise of possible AI applications.[60] The ability to monitor patients using AI may allow for the communication of information to physicians if possible disease activity has occurred.[61] A wearable device may allow for constant monitoring of a patient and the ability to notice changes that may be less distinguishable by humans.
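One simple way such monitoring could flag subtle changes is a rolling-baseline check over a stream of wearable readings; a real system would use far more sophisticated models. The readings and threshold below are invented for illustration:

```python
# A minimal sketch of flagging subtle changes in a stream of wearable readings.
# The readings and threshold are invented for illustration.
from statistics import mean, stdev

def flag_changes(readings: list[float], window: int = 10, z_threshold: float = 3.0) -> list[int]:
    """Return indices whose value deviates from the rolling baseline by more
    than z_threshold standard deviations."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Hypothetical resting heart-rate samples with a subtle jump at the end.
heart_rate = [62, 63, 61, 64, 62, 63, 62, 61, 63, 62, 62, 63, 78]
print(flag_changes(heart_rate))  # -> [12]
```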
Electronic health records (EHR) are crucial to the digitalization and spread of information in the healthcare industry. However, logging all of this data comes with its own problems, such as cognitive overload and burnout for users. EHR developers are now automating much of the process and even starting to use natural language processing (NLP) tools to improve it. One study conducted by the Centerstone Research Institute found that predictive modeling of EHR data achieved 70–72% accuracy in predicting individualized treatment response at baseline,[62] suggesting that an AI tool scanning EHR data could fairly accurately predict how a person will respond to treatment.
Improvements in natural language processing led to the development of algorithms to identify drug-drug interactions in medical literature.[63][64][65][66] Drug-drug interactions pose a threat to those taking multiple medications simultaneously, and the danger increases with the number of medications being taken.[67] To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature. Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms.[68] Competitors were tested on their ability to accurately determine, from the text, which drugs were shown to interact and what the characteristics of their interactions were.[69] Researchers continue to use this corpus to standardize the measurement of the effectiveness of their algorithms.[63][64][66]
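Evaluation in this kind of challenge comes down to comparing the drug pairs an algorithm extracts from text against gold-standard annotations. The toy extractor, lexicon, sentences, and gold pairs below are invented for illustration; real systems use trained NLP models rather than simple co-occurrence rules:

```python
# Toy illustration of drug-drug interaction extraction and its evaluation.
# The drug lexicon, sentences, and gold annotations are invented for illustration.
from itertools import combinations

DRUG_LEXICON = {"warfarin", "aspirin", "ibuprofen", "metformin"}

def extract_pairs(sentence: str) -> set[frozenset]:
    """Naive extractor: any two known drugs co-occurring in a sentence that
    contains an interaction cue word are reported as an interacting pair."""
    words = set(sentence.lower().replace(",", " ").split())
    drugs = words & DRUG_LEXICON
    if {"interacts", "interaction", "increases"} & words:
        return {frozenset(p) for p in combinations(sorted(drugs), 2)}
    return set()

sentences = [
    "Warfarin interacts with aspirin and may increase bleeding risk.",
    "Metformin was well tolerated in this cohort.",
]
gold = {frozenset({"warfarin", "aspirin"})}  # gold-standard annotated pair

# Score the extractor against the gold standard, challenge-style.
predicted = set().union(*(extract_pairs(s) for s in sentences))
tp = len(predicted & gold)
precision = tp / len(predicted) if predicted else 0.0
recall = tp / len(gold) if gold else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")  # 1.00 / 1.00 on this toy input
```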
Other algorithms identify drug-drug interactions from patterns in user-generated content, especially electronic health records and adverse event reports.[64][65] Reporting systems such as the FDA Adverse Event Reporting System (FAERS) and the World Health Organization's (WHO) VigiBase allow doctors to submit reports of possible negative reactions to medications. Deep learning algorithms have been developed to parse these reports and detect patterns that imply drug-drug interactions.[70]