Robotic sensing is a subarea of robotics science intended to provide sensing capabilities to robots. It gives robots the ability to sense their environments and is typically used as feedback to enable robots to adjust their behavior based on sensed input. Robot sensing includes the ability to see,[1][2][3] touch,[4][5][6] hear[7] and move,[8][9][10] together with the associated algorithms that process and make use of environmental feedback and sensory data. Robot sensing is important in applications such as vehicular automation and robotic prosthetics, and for industrial, medical, entertainment and educational robots.
Visual sensing systems can be based on a variety of technologies and methods, including cameras, sonar, lasers and radio-frequency identification (RFID)[1] technology. All four methods aim at three procedures: sensation, estimation, and matching.
Image quality is important in applications that require excellent robotic vision. Algorithms based on the wavelet transform, used for fusing images captured at different spectra and different foci, result in improved image quality.[2] Robots can gather more accurate information from the resulting improved image.
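As a rough illustration of this kind of fusion, the sketch below applies a one-level 2-D Haar wavelet transform to two grayscale images, averages the coarse approximations, and keeps whichever detail coefficient has the larger magnitude, so the locally sharper image dominates. Real systems use deeper decompositions and more sophisticated fusion rules; everything here is a simplified example.

```python
# Minimal wavelet-fusion sketch (one-level Haar transform, numpy arrays).
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (approximation, details)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2d(ll, details):
    """Inverse of haar2d: exact reconstruction."""
    lh, hl, hh = details
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2, :] = a + d; out[1::2, :] = a - d
    return out

def fuse(img1, img2):
    """Average approximations; keep the larger-magnitude detail
    coefficient (the locally sharper image dominates)."""
    ll1, (lh1, hl1, hh1) = haar2d(img1)
    ll2, (lh2, hl2, hh2) = haar2d(img2)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    return ihaar2d((ll1 + ll2) / 2.0,
                   (pick(lh1, lh2), pick(hl1, hl2), pick(hh1, hh2)))
```

Because the Haar transform here is exactly invertible, fusing an image with itself returns the image unchanged, which is a convenient sanity check on any fusion rule.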
Visual sensors help robots to identify the surrounding environment and take appropriate action.[3] Robots analyze the image of the immediate environment based on data input from the visual sensor. The result is compared to the ideal, intermediate or end image, so that appropriate movement or action can be determined to reach the intermediate or final goal.
Electronic skin refers to flexible, stretchable and self-healing electronics that are able to mimic the functionalities of human or animal skin.[12][13] This broad class of materials often contains sensing abilities intended to reproduce the capability of human skin to respond to environmental factors such as changes in heat and pressure.[12][13][14][15]
Advances in electronic skin research focus on designing materials that are stretchy, robust, and flexible. Research in the individual fields of flexible electronics and tactile sensing has progressed greatly; however, electronic skin design attempts to bring together advances in many areas of materials research without sacrificing the individual benefits of each field.[16] The successful combination of flexible and stretchable mechanical properties with sensors and the ability to self-heal would open the door to many possible applications, including soft robotics, prosthetics, artificial intelligence and health monitoring.[12][16][17][18]
Recent advances in the field of electronic skin have focused on incorporating green-materials ideals and environmental awareness into the design process. Because one of the main challenges facing electronic skin development is the ability of the material to withstand mechanical strain while maintaining its sensing ability or electronic properties, recyclability and self-healing properties are especially critical in the future design of new electronic skins.[19] Examples of the state of progress in robot skins as of mid-2022 include a robotic finger covered in a type of manufactured living human skin,[20][21] an electronic skin giving biological skin-like haptic sensations and touch/pain sensitivity to a robotic hand,[22][23] a system combining an electronic skin and a human-machine interface that can enable remote sensed tactile perception as well as wearable or robotic sensing of many hazardous substances and pathogens,[24][25] and a multilayer tactile-sensor, hydrogel-based robot skin.[26][27]
As robots and prosthetic limbs become more complex, the need for sensors capable of detecting touch with high tactile acuity grows. Many kinds of tactile sensors are used for different tasks, and they fall into three types.[28] The first, single-point sensors, can be compared to a single cell or a whisker and detect very local stimuli. The second, high-spatial-resolution sensors, can be compared to a human fingertip and are essential for tactile acuity in robotic hands. The third, low-spatial-resolution sensors, have tactile acuity similar to the skin on one's back or arm.[28] These sensors can be placed meaningfully throughout the surface of a prosthetic or a robot to give it the ability to sense touch in similar, if not better, ways than its human counterpart.[28]
Touch sensory signals can be generated by the robot's own movements, so it is important to isolate the external tactile signals for accurate operation. Previous solutions employed the Wiener filter, which relies on prior knowledge of signal statistics that are assumed to be stationary. A more recent solution applies an adaptive filter to the robot's logic.[4] It enables the robot to predict the sensor signals resulting from its internal motions and screen these false signals out. This method improves contact detection and reduces false interpretation.
Touch patterns enable robots to interpret human emotions in interactive applications.[29] Four measurable features (force, contact time, repetition, and contact-area change) can effectively categorize touch patterns through a temporal decision tree classifier, which accounts for the time delay, and associate them with human emotions with up to 83% accuracy.[5] A Consistency Index[5] is applied at the end to evaluate the system's level of confidence and prevent inconsistent reactions.
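A rule-based stand-in for such a classifier, using the four features named above, might look like the following. The thresholds and labels are invented for illustration; the cited work learns such rules with a temporal decision tree rather than hand-coding them.

```python
# Illustrative touch-pattern classification from four features.
# All thresholds and category names are invented for this example.
def classify_touch(force, contact_time, repetition, area_change):
    if force > 5.0 and contact_time < 0.3:
        return "hit"        # strong, brief contact
    if repetition >= 3 and contact_time < 0.5:
        return "pat"        # repeated light, short contacts
    if contact_time > 1.0 and area_change > 0.2:
        return "stroke"     # long contact with a moving contact area
    return "touch"          # default: simple touch
```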
Robots use touch signals to map the profile of a surface in hostile environments such as a water pipe. Traditionally, a predetermined path was programmed into the robot. With the integration of touch sensors, the robot first acquires a random data point; its algorithm[6] then determines the ideal position of the next measurement according to a set of predefined geometric primitives. This improves efficiency by 42%.[5]
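A minimal sketch of this probing strategy, assuming a 1-D surface and a straight line as the geometric primitive: probe in the widest unexplored gap, then fit the primitive to the accumulated points. The functions and parameters are illustrative, not the cited algorithm.

```python
# Touch-based profiling sketch: choose the next probe location, then
# fit a geometric primitive (here a line y = m*x + c) to the samples.
def next_probe_x(sampled_xs, x_min=0.0, x_max=1.0):
    """Return the midpoint of the widest unexplored gap."""
    xs = sorted([x_min] + list(sampled_xs) + [x_max])
    gaps = [(b - a, (a + b) / 2.0) for a, b in zip(xs, xs[1:])]
    return max(gaps)[1]

def fit_line(points):
    """Least-squares fit of y = m*x + c to probed (x, y) points."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n
```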
In recent years, using touch as a stimulus for interaction has been the subject of much study. In 2010, the robot seal PARO was built, which reacts to many stimuli from human interaction, including touch. The therapeutic benefits of such human-robot interaction are still being studied, but have shown very positive results.[30]
Accurate audio sensors require a low internal noise contribution. Traditionally, audio sensors combine acoustical arrays and microphones to reduce the internal noise level. Recent solutions also incorporate piezoelectric devices.[7] These passive devices use the piezoelectric effect to transform force into voltage, so that the vibration causing the internal noise can be eliminated. On average, internal noise of up to about 7 dB can be reduced.[7]
Robots may interpret stray noise as speech instructions. Current voice activity detection (VAD) systems use the complex spectrum circle centroid (CSCC) method and a maximum signal-to-noise ratio (SNR) beamformer.[31] Because humans usually look at their partners when conducting conversations, a VAD system with two microphones enables the robot to locate instructional speech by comparing the signal strengths of the two microphones. Current systems can cope with background noise from televisions and sounding devices located to the sides.
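The two-microphone localization idea can be sketched by comparing per-channel signal strength: frontal speech arrives at both microphones with roughly equal level, while a side source is markedly louder in one channel. The thresholds below are invented for the example and are not those of the cited system.

```python
# Two-microphone frontal-speech check via per-channel RMS levels.
import math

def rms(frame):
    """Root-mean-square level of one audio frame (list of samples)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def is_frontal_speech(left, right, level_db=-30.0, balance_db=3.0):
    l, r = rms(left), rms(right)
    loud = 20 * math.log10(max(l, r) + 1e-12) > level_db      # any activity?
    balanced = abs(20 * math.log10((l + 1e-12) / (r + 1e-12))) < balance_db
    return loud and balanced          # active AND roughly equal in level
```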
Robots can perceive emotion through the way we talk and through associated characteristics and features. Acoustic and linguistic features are generally used to characterize emotions. The combination of seven acoustic features and four linguistic features improves recognition performance compared to using only one set of features.[32]
Machine olfaction is the automated simulation of the sense of smell. An emerging application in modern engineering, it involves the use of robots or other automated systems to analyze airborne chemicals. Such an apparatus is often called an electronic nose or e-nose. The development of machine olfaction is complicated by the fact that e-nose devices to date have responded to a limited number of chemicals, whereas odors are produced by unique sets of (potentially numerous) odorant compounds. The technology, though still in the early stages of development, promises many applications,[33] such as quality control in food processing, detection and diagnosis in medicine,[34] detection of drugs, explosives and other dangerous or illegal substances,[35] disaster response, and environmental monitoring.
One type of proposed machine olfaction technology is gas sensor array instruments capable of detecting, identifying, and measuring volatile compounds. A critical element in the development of these instruments is pattern analysis, and the successful design of a pattern-analysis system for machine olfaction requires careful consideration of the various issues involved in processing multivariate data: signal preprocessing, feature extraction, feature selection, classification, regression, clustering, and validation.[36] Another challenge in current research on machine olfaction is the need to predict or estimate the sensor response to aroma mixtures.[37] Some pattern recognition problems in machine olfaction, such as odor classification and odor localization, can be solved by using time series kernel methods.[38]

The electronic tongue is an instrument that measures and compares tastes. As per the IUPAC technical report, an "electronic tongue" is an analytical instrument comprising an array of non-selective chemical sensors with partial specificity to different solution components and an appropriate pattern recognition instrument, capable of recognizing the quantitative and qualitative compositions of simple and complex solutions.[39][40]
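The pattern-analysis stage shared by e-noses and e-tongues can be sketched as a nearest-centroid match against known fingerprints: the raw sensor-array response is normalized (preprocessing) and compared to stored reference patterns (classification). The sensor values, fingerprints, and analyte names below are invented for illustration.

```python
# Sensor-array pattern analysis sketch: normalize, then nearest centroid.
import math

def normalize(response):
    """Preprocessing: convert raw responses to a fractional pattern."""
    total = sum(response) or 1.0
    return [r / total for r in response]

def classify(response, fingerprints):
    """Return the fingerprint name closest (Euclidean) to the response."""
    x = normalize(response)
    def dist(name):
        return math.sqrt(sum((a - b) ** 2
                             for a, b in zip(x, fingerprints[name])))
    return min(fingerprints, key=dist)

fingerprints = {                 # invented 4-sensor reference patterns
    "ethanol": [0.50, 0.30, 0.15, 0.05],
    "acetone": [0.10, 0.20, 0.30, 0.40],
}
```

Normalizing first makes the match insensitive to overall concentration, so only the shape of the response pattern (the "fingerprint") decides the classification.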
Chemical compounds responsible for taste are detected by human taste receptors. Similarly, the multi-electrode sensors of electronic instruments detect the same dissolved organic and inorganic compounds. Like human receptors, each sensor has a spectrum of reactions different from the others. The information given by each sensor is complementary, and the combination of all the sensors' results generates a unique fingerprint. Most of the detection thresholds of the sensors are similar to or better than those of human receptors.
In the biological mechanism, taste signals are transduced by nerves in the brain into electric signals. The e-tongue sensor process is similar: the sensors generate electric signals as voltammetric and potentiometric variations.
Taste quality perception and recognition are based on the building or recognition of activated sensory nerve patterns by the brain and on the taste fingerprint of the product. This step is achieved by the e-tongue's statistical software, which interprets the sensor data into taste patterns. For example, robot cooks may be able to taste food for dynamic cooking.[41]

Automated robots require a guidance system to determine the ideal path to perform their task. At the molecular scale, however, nano-robots lack such a guidance system, because individual molecules cannot store complex motions and programs. The only way to achieve motion in such an environment is therefore to replace sensors with chemical reactions. Currently, a molecular spider that has one streptavidin molecule as an inert body and three catalytic legs is able to start, follow, turn and stop when it comes across different DNA origami.[8] These DNA-based nano-robots can move over 100 nm at a speed of 3 nm/min.[8]
In a TSI operation, an effective way to identify tumors and potentially cancer by measuring the distributed pressure at the sensor's contacting surface, excessive force may inflict damage and risk destroying the tissue. Applying robotic control to determine the ideal path of operation can reduce the maximum forces by 35% and increase accuracy by 50%[9] compared to human doctors.
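The force-limiting aspect of such control can be sketched as a simple proportional controller that tracks a target contact force under a hard safety cap, with the tissue modelled as a linear spring. The gains, the spring model, and all names are invented for illustration and are not the cited control scheme.

```python
# Force-limited probing sketch: proportional depth control with a cap.
def probe_step(depth, force, target=1.0, f_max=2.0, gain=0.005):
    """One control step: return the new probe depth."""
    if force > f_max:                 # hard safety limit: back off at once
        return depth * 0.5
    return depth + gain * (target - force)   # track the target force

# Toy simulation: tissue modelled as a linear spring, force = k * depth.
k, depth = 100.0, 0.0
for _ in range(50):
    depth = probe_step(depth, k * depth)
```

In this toy loop the contact force converges to the target without ever exceeding the safety cap, which is the behavior the controller is meant to guarantee.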
Efficient robotic exploration saves time and resources. Efficiency is measured by optimality and competitiveness. Optimal boundary exploration is possible only when the robot has a square sensing area, starts at the boundary, and uses the Manhattan metric.[10] In complicated geometries and settings, a square sensing area is more efficient and achieves better competitiveness regardless of the metric and the starting point.[10]
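For the simple case of a rectangular region and a square sensing footprint, one efficient coverage pattern under the Manhattan metric is a boustrophedon (lawn-mower) sweep over sensor-sized cells, sketched below. This is an illustrative special case, not the cited algorithm.

```python
# Boustrophedon coverage sketch: visit every cell of a w x h grid of
# sensor-sized cells, reversing direction on each row so that every
# move is a single Manhattan step.
def coverage_path(w, h):
    path = []
    for row in range(h):
        cols = range(w) if row % 2 == 0 else range(w - 1, -1, -1)
        path.extend((col, row) for col in cols)
    return path
```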
Robots may not only be equipped with higher sensitivity and capability per sense than all or most[42] non-cyborg humans, such as the ability to "see" more of the electromagnetic spectrum (for instance ultraviolet) with higher fidelity and granularity, but may also have additional senses, such as sensing of magnetic fields (magnetoreception)[43] or of various hazardous air components.[25]
Robots may share,[44] store, and transmit sensory data, as well as data derived from it. They may learn from, or interpret, the same or related data in different ways, and some robots may have remote senses (e.g. without local interpretation, processing or computation, as with common types of telerobotics or with embedded[45] or mobile "sensor nodes"). Processing of sensory data may include processes such as facial recognition,[46] facial expression recognition,[47] gesture recognition and the integration of interpretative abstract knowledge.