WO2025117501A1 - Full field of view virtual reality goggles - Google Patents

Full field of view virtual reality goggles

Info

Publication number
WO2025117501A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual reality
eye
reality system
mouse
lens
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/057391
Other languages
French (fr)
Inventor
Daniel A. Dombeck
Domonkos Peter Pinke
John Bassam ISSA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern University
Original Assignee
Northwestern University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern University
Publication of WO2025117501A1
Status: Pending
Anticipated expiration

Abstract

This invention relates to a virtual reality (VR) system, comprising a pair of concave lenses; and a pair of screens, arranged in relation to the pair of concave lenses and eyes of a subject, for fully illuminating the visual field of view (FOV) of the subject, and configured to illuminate each eye with an about 180-degree field of view in all directions.

Description

FULL FIELD OF VIEW VIRTUAL REALITY GOGGLES
STATEMENT AS TO RIGHTS UNDER FEDERALLY-SPONSORED RESEARCH
This invention was made with government support under Grant No. R01-MH101297, awarded by the National Institutes of Health, and Grant No. ECCS-1835389, awarded by the National Science Foundation. The government has certain rights in the invention.
CROSS-REFERENCE TO RELATED PATENT APPLICATION
This application claims priority to and the benefit of U.S. Provisional Application Serial No. 63/603,208, filed November 28, 2023, which is incorporated herein in its entirety by reference.
FIELD OF THE INVENTION
The present invention generally relates to virtual reality, particularly to full field of view virtual reality goggles.
BACKGROUND OF THE INVENTION
The background description provided herein is to present the context of the invention generally. The subject matter discussed in the background of the invention section should not be assumed to be prior art merely due to its mention in the background of the invention section. Similarly, a problem mentioned in the background of the invention section or associated with the subject matter of the background of the invention section should not be assumed to have been previously recognized in the prior art. The subject matter in the background of the invention section merely represents different approaches, which in and of themselves may also be inventions. Work of the presently named inventors, to the extent it is described in the background of the invention section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the invention.
The large degree of mechanical stability provided by mouse head-fixation has facilitated the use of high-resolution functional microscopy, intracellular patch clamp recording and large-scale single unit recording in behaving mice that are either stationary or running on treadmills. The addition of visual virtual reality simulations, driven in a closed loop by treadmill movements, has provided the ability to study spatial behaviors in head-fixed mice. These systems have been used to investigate neural circuits underlying behaviors such as goal-directed navigation, decision making, accumulation of evidence and path integration. However, despite these advantages, current VR approaches have limitations that potentially reduce rodent immersion in the virtual environment.
Proper illumination of the mouse visual system is particularly challenging due to the wide field of view (FOV) of the mouse eye. Each mouse eye accepts a FOV of about 140 degrees (both in the azimuthal and vertical elevation directions), with about 40 degrees of binocular overlap in the azimuthal plane and more at larger vertical elevations at the resting eye position, leading to a full azimuthal FOV of about 240 degrees and a vertical elevation FOV greater than about 200 degrees (panels A-B of FIG. 1). Current virtual reality (VR) systems typically consist of a head-fixed mouse running on a treadmill with a surrounding visual display consisting of either a large, curved screen illuminated by a projector or multiple computer monitors assembled side-by-side. Projection systems typically illuminate about 270 and 160 degrees of the azimuthal and vertical elevation directions, respectively, and monitor-based systems about 220 and 140 degrees, leaving portions of the mouse FOV un-illuminated, particularly in the vertical elevation direction in the critical overhead region (panel C of FIG. 1). Additionally, at the distances between the mouse and the VR screens in current systems (0.5-1 m), objects in the lab frame are visible (head-fixation bars, optical table-top, lick tube, screen bezels, cameras, etc.). Importantly, the microscope itself is within (and blocking) the overhead field-of-view of the mouse. These immobile objects do not move with the virtual simulation and therefore create cue conflicts between the virtual and lab reference frames while also partially blocking views of the virtual world.
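For illustration only (not part of the disclosed apparatus), the azimuthal field-of-view figures quoted above can be cross-checked with a short calculation; the per-eye FOV and binocular overlap values below are the approximate numbers stated in this paragraph, not precise anatomical measurements.

```python
# Rough cross-check of the azimuthal field-of-view figures quoted above.
per_eye_fov_deg = 140          # approximate azimuthal FOV accepted by each eye
binocular_overlap_deg = 40     # approximate overlap at the resting eye position

# Two eyes cover 2 * 140 degrees, but the overlapping wedge is counted twice.
total_azimuthal_fov_deg = 2 * per_eye_fov_deg - binocular_overlap_deg
print(total_azimuthal_fov_deg)  # 240 degrees, matching the full azimuthal FOV above
```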
Another important and unique feature of the mouse visual system is the large binocular region that mice maintain both to the front, and even more prominently, above their head (panel A of FIG. 1). Because of the separation of the eyes on the animal’s head, real world objects in the binocular FOV region are viewed by each eye at different angles (binocular disparity). The overhead visual region is particularly important for rodent behavior and survival, as mice continually monitor binocular overlap for threats coming from above (panel B of FIG. 1). Current VR systems generate a single rendering of the virtual world, so that each eye sees the same view of objects in the binocular region, eliminating stereo depth information that may be present (panel C of FIG. 1). Furthermore, as mentioned, recording components (such as an upright microscope) and head fixation bars occlude the overhead visual region. Overall, the above limitations lead to deviations in how the visual system is illuminated between real and virtual environments and may reduce the overall immersion of current rodent VR systems.
Lastly, current VR systems are relatively large and often require significant engineering for the inclusion of microscopy or electrophysiology components. Their large size may also be prohibitive for building large scale behavior training arrays, where dozens of mice can be trained on relatively complicated tasks at the same time.
Therefore, a heretofore unaddressed need exists in the art to address the aforementioned deficiencies and inadequacies.
SUMMARY OF THE INVENTION
In one aspect, this invention relates to a virtual reality (VR) system, comprising a pair of concave lenses; and a pair of screens, arranged in relation to the pair of concave lenses and eyes of a subject, for fully illuminating the visual field of view (FOV) of the subject.
In one embodiment, the virtual reality system is configured to image objects displayed on the screen onto the eye retinas of the subject through the concave lens.
In one embodiment, the virtual reality system is configured to illuminate each eye with an about 180-degree field of view in all directions.
In one embodiment, the virtual reality system is configured to separately illuminate each eye for stereo illumination of the binocular zone, thereby excluding a lab frame from view while also accommodating saccades.
In one embodiment, the about 180-degree field of view includes about 140 degrees for each eye FOV and +/- 20 degrees for saccades.
In one embodiment, each concave lens is a positive-meniscus lens having an inner surface operably facing an eye of the subject.
In one embodiment, each concave lens is arranged such that each eye is centered at a predetermined distance from an inner surface of each positive-meniscus lens.
In one embodiment, each screen is a curved illumination display.
In one embodiment, each screen is a flexible light-emitting diode (LED) display.
In one embodiment, the virtual reality system provides a mean resolution of about 2.2 pixels/degree, or better, across the about 180-degree FOV.
In one embodiment, each screen is a high-resolution organic light-emitting diode (OLED) screen configured to increase the pixels/degree to further exceed the acuity of mice; the immersiveness of the VR experience for mice can be further increased by incorporating other sensory modalities, including olfactory, auditory, and tactile modalities, into the virtual simulation.
In one embodiment, the virtual reality system further comprises a pair of screen holders, each screen holder with a curvature matching that of the screen, to which the screen is affixed; and a pair of lens holders, each lens holder with the lens attached to one end and the other end mated to the screen holder such that the lens is centered at the desired distance from the screen.
In one embodiment, each lens holder is mated to the screen holder with magnets.
In one embodiment, an optical axis of the virtual reality system is aligned with the optical axis of the mouse eye at its resting position.
In one embodiment, when aligned, the lens holder is in a desired position with the eye centered at an about 1 mm distance from an inner surface of each positive-meniscus lens.
In one embodiment, the virtual reality system is compatible with two-photon functional microscopy.
In one embodiment, the virtual reality system allows one to place it under an upright two-photon microscope, providing a full FOV, including the overhead visual region, while imaging.
In one embodiment, a custom shielding system is provided to fit around the objective and connected to a ring on the head of the subject, in order to block light from the illuminated screens from being detected by the microscope’s photodetectors.
In one embodiment, by combining the virtual reality system with two-photon functional imaging, the virtual reality system is usable to establish the existence of large populations of place cells in the hippocampus during virtual navigation, global remapping during an environment change, and the first descriptions of the response of place cell ensembles to overhead looming stimulation.
In one embodiment, the virtual reality system is usable for studying neural coding properties of visual behaviors that utilize the large overhead binocular region thought to play a critical role in many rodent behaviors.
In one embodiment, the virtual reality system further comprises at least one separate, but compatible, optical path for a camera to monitor eye movements and pupil size.
In one embodiment, the virtual reality system further comprises prism mirrors for reorienting the placement of the screens relative to the head of the subject, thereby affording more access to certain brain regions. In one embodiment, the virtual reality system further comprises a data acquisition module for electrical recordings of data and data processing.
In one embodiment, the virtual reality system is compatible with other VR approaches that rotate the animal in conjunction with movements through the virtual space to activate the vestibular system.
In one embodiment, the virtual reality system is wearable by a freely moving subject.
In one embodiment, the virtual reality system is usable for augmented visual reality paradigms in which the other senses, as well as self-motion cues, are preserved.
In one embodiment, the virtual reality system is a miniature rodent stereo illumination VR (iMRSIV) system that is at least about 10 times smaller than existing VR systems.
In one embodiment, the virtual reality system further comprises an eye tracking module configured to determine position and orientation of the eye of the subject for aligning each eye to a proper location with respect to the virtual reality system and measuring the orientation of the eye, once aligned.
In one embodiment, the eye tracking module comprises an infrared (IR) illumination, an IR mirror, and an IR camera positioned in an eye tracking path in relation to the eye of the subject.
In one embodiment, the IR camera comprises CCD (charge-coupled device) and/or CMOS (complementary metal-oxide-semiconductor) sensors.
In one embodiment, the IR illumination comprises one or more IR LEDs emitting IR light with a wavelength of about 850 nm.
In one embodiment, a dichroic film is provided to cover the display to act as the IR mirror in the eye tracking path while transmitting the VR visible scene.
In one embodiment, the dichroic film is an IR reflective and visible passing film.
In one embodiment, the eye tracking module further comprises a visible light filter positioned in the eye tracking path in the front of the camera to block the visible light from the camera and passing the IR light.
These and other aspects of the present invention will become apparent from the following description of the preferred embodiment taken in conjunction with the following drawings, although variations and modifications therein may be effected without departing from the spirit and scope of the novel concepts of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings illustrate one or more embodiments of the invention and together with the written description, serve to explain the principles of the invention. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or like elements of an embodiment.
FIG. 1 shows the mouse visual system and a new concept for mouse virtual reality goggles. Panel A: Mouse visual field of view with monocular (green) and binocular (red) regions shown at resting eye gaze position from top-down and front perspectives. Panel B: Simulated mouse in a cue rich environment, including overhead owl (left), with simulated 140-degree field of view from the two eyes (right). Note the different perspective from each eye of the cheese and owl objects in the binocular overlap region (highlighted in red). Panel C: Simulation of mouse field of view in a computer monitor based VR system (left), with simulated 140-degree field of view from the two eyes (right), binocular overlap region highlighted in red, and representation of overhead microscope (black rectangle above mouse). Note the large (black) region of the visual field of view that is not illuminated by the VR screens, no owl present since the overhead region is not rendered on the screens, and the same perspective from each eye of the cheese in the binocular region. Panel D: Simulation of the mouse field of view using the new concept presented here using goggles (left), with simulated 140-degree field of view from the two eyes (right), and binocular overlap region highlighted in red. Note the different perspective from each eye of the cheese and owl objects in this region, and also note that the full visual field of view is illuminated in each eye. Note that the overhead microscope from C is not visible to the mouse in this setup.
FIG. 2 shows iMRSIV goggle device design and validation. Panel A: Zemax simulated mouse eye retina at a distance of 200 mm from the checkerboard in panel B. Rays for 3 different object points are shown (red, green, blue). Panel B: The 482×261 mm checkerboard used as the object in Zemax simulations. Panel C: Real world reproduction of simulated Zemax arrangement, from side and top views. 482×261 mm checkerboard shown on computer monitor 200 mm from an extracted mouse eye. Camera used to view the back of the retina. Panel D: Resulting image of the checkerboard object on the Zemax simulated eye retina, view from the back of the retina. Panel E: Image of the computer monitor checkerboard object on the retina of an extracted mouse eye, as viewed from the camera. Panel F: Our optical system to achieve a 180-degree FOV using a custom designed positive-meniscus lens and a small curved illumination display, shown with mouse eye at correct location. Panel G: Zemax simulation of rays from different screen points traveling through mouse eye to the retina; blue, center of optical axis; red and green, edges of 140-degree eye field of view imaged onto retina; pink and yellow, edges of 180-degree field of view not imaged onto retina, but illuminated on screen for additional FOV for eye saccades. Panel H: Same as panel G, but zoomed in on eye. Panel I: Recreation of the checkerboard arrangement from panels B-C, but in a virtual world using Unity3D. 180-degree FOV of this scene was generated using a single Unity3D camera and a custom fish-eye shader. 140-degree FOV highlighted in red. Schematic shows 140-degree FOV and full 180-degree FOV to accommodate 20-degree gazes. Panel J: Eye model (as in panel H) and simulated recreation of checkerboard using custom fish-eye shader (as in panel I) after 20-degree saccade (gaze rotation). Panel K: Real optical iMRSIV system composed of curved screen and custom lens, along with experimental setup shown underneath. Panel L: The 180-degree FOV from panel I was transferred to the small curved display in Zemax and used as the object, which was imaged onto the mouse eye retina through the positive-meniscus lens; the resulting image of the checkerboard object on the Zemax simulated eye retina is shown here (140-degree eye FOV), view from the back of the retina. Panel M: Checkerboard scene from panel J was used to illuminate the real OLED screen; the resulting image (through the real positive-meniscus lens) on the retina of an extracted mouse eye is shown, as viewed from a camera at the back of the retina. Panels N-O: Same as panels L-M, but with eye rotated 20-degrees with respect to screen center (as in ray diagram in panel J, left) to simulate a 20-degree saccade.
FIG. 3 shows iMRSIV behavior apparatus and device-eye alignment procedures. Panel A: Left, iMRSIV system connected to 3D micropositioners with metal bars, and incorporated into a head-fixed behavior apparatus with treadmill and reward delivery system. Right, photo of mouse in iMRSIV system. Panel B: Zoomed in view from A showing iMRSIV system and headplate positions with respect to mouse. Panel C: Schematic of electronics connections for control and reading from iMRSIV system, treadmill and reward delivery systems. Panel D: Left, 3D printed frame used during surgery to position the head-plate at the same location with respect to the eyes across different mice. Middle, view of frame on mouse and aligned to eyes; right, zoomed in view. Panel E: Left, 3D printed frame with pointed target used to position each half of the iMRSIV system with respect to each eye before each session. Middle, view of frame on mount and target aligned to mouse eye; right, back view. Panel F: Left, separated iMRSIV system components. Middle, iMRSIV system aligned to correct location with respect to mouse eyes (only one side is shown for clarity); right, back view.
FIG. 4 shows iMRSIV spatial behaviors: linear track and looming stimulation. Panel A: Linear track used for behavior, with tunnels (brown) and reward (blue) locations shown. Panel B: Trials/min over training days (sessions) for the conventional 5-panel VR group (left) and the iMRSIV group (right). Light grey lines show data for individual mice. Thick line and shading represent mean +/- SEM across mice. Dashed line reproduces mean for 5-panel group. Panel C: Top, prelicking index over training days for the 5-panel VR group (left) and the iMRSIV group (right). Bottom, mean licking rate vs. position (reward position, blue) over all mice in each group for days 1, 2 and 3 of training. Note the anticipatory licking in the iMRSIV group on day 1 that is not present in the 5-panel group. * indicates significant difference (p < 0.05) between groups on day 1 using 2-sample t-test. Panel D: Linear track used for looming behavior, with tunnels (brown), reward (blue) and looming stimulation (black discs) locations shown. Panel E: Top, three examples of behavioral responses to the looming stimulus (dashed line) showing no change in running velocity for a 5-panel group mouse (left) and rapid freezing for one (middle) and fleeing followed by freezing in the other (right) iMRSIV group mice. Bottom, plots of mean velocity vs. time at looming onset (dashed lines) over all mice in each group. Quantification of freeze durations for each mouse across groups also shown, parsed by time to first movement and time to first run. Note the long-lasting freezing in the iMRSIV group that is not present in the 5-panel group. * indicates significant difference (p < 0.05) between groups on day 1 using 2-sample t-test.
FIG. 5 shows two-photon calcium imaging during iMRSIV spatial behaviors. Panel A: iMRSIV+2P. Example two-photon imaging field of CA1 neurons labeled with jGCaMP8m and regions of interest (ROIs) segmented from the field. Imaging occurred during iMRSIV system familiar linear track navigation. Panel B: Left, example recording of 253 place cells in a single field. DF/F0 vs time for each neuron is shown over several trials along with track position, running velocity and licking. Right, mean transient rate vs. track position for all place cells from the 4 familiar environment imaging sessions (n=4 mice), even trial number firing patterns sorted based on place field location on odd trials, and histogram of place field peak locations. Panel C: Mean transient rate vs. track position for all place cells with fields in both environments, from the 4 environment switch imaging sessions (n=4 mice). Left, place field firing patterns in familiar track even trials, sorted based on familiar track peak odd trial locations, scatter plot of place field peak locations familiar even laps vs familiar odd laps, and histogram of place field peak locations; middle, place field firing patterns in novel track trials, sorted based on familiar track peak locations, scatter plot of place field peak locations familiar laps vs novel laps, and spatial correlations between place fields — familiar odd vs familiar even, familiar vs novel, and novel odd vs novel even; right, place field firing patterns in novel track even trials, sorted based on novel track peak odd trial locations, scatter plot of place field peak locations novel even laps vs novel odd laps, and histogram of place field peak locations. * indicates significant difference (p < 0.05) using 1-sample t-test. Panel D: Mean transient rate vs. track position for all place cells during looming sessions, from the 11 looming imaging sessions (n=4 mice). Left, place field firing patterns from pre-loom even trials, sorted based on pre-loom peak odd trial locations, scatter plot of place field peak locations pre-loom even laps vs pre-loom odd laps; middle, place field firing patterns in post-loom track trials, sorted based on pre-loom peak track locations, scatter plot of place field peak locations pre-loom laps vs post-loom laps; right, place field firing patterns from post-loom even trials, sorted based on post-loom peak odd trial locations, scatter plot of place field peak locations post-loom even laps vs post-loom odd laps; bottom, spatial correlations between place fields — pre-loom odd vs pre-loom even, pre-loom vs post-loom, and post-loom odd vs post-loom even, along with spatial correlations vs track position for the three comparisons. * indicates significant difference (p < 0.05) using 1-sample t-test. Panel E: Bayesian decoding of mouse location based on CA1 neuron firing patterns. Top, example session showing actual mouse track location vs decoded position, where the encoding model was built with some pre-loom trials and decoding was applied to the remaining pre-loom trials (left) or applied to the post-loom trials (right). Bottom, decoding position error vs track position for pre-encoding/pre-decoding (left) and pre-encoding/post-decoding (right) — pre-pre reproduced in grey for comparison. Panel F: Two examples of decoded position probability vs time (heat maps, top) during several pre-loom trials and during the freezing periods, along with plots (bottom) of actual position (black) and decoded position (peak probability, orange).
Note the high correspondence between actual and decoded positions during pre-loom trials, but the large difference between mouse actual and decoded positions during the freezing periods.
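Panel E of FIG. 5 refers to Bayesian decoding of track position from CA1 firing patterns. The decoder itself is not spelled out in this text; the sketch below shows a standard memoryless Poisson decoder of the kind commonly used for place-cell data, with all array names chosen here only for illustration.

```python
import numpy as np

def decode_position(spike_counts, tuning_curves, dt, prior=None):
    """Memoryless Bayesian (Poisson) position decoder.

    spike_counts : (n_cells,) spike/transient counts in one time bin of width dt (s)
    tuning_curves: (n_cells, n_bins) mean rate of each cell in each track bin
    prior        : optional (n_bins,) prior over track bins
    Returns the posterior probability over track bins.
    """
    expected = tuning_curves * dt + 1e-12                        # expected counts
    log_post = spike_counts @ np.log(expected) - expected.sum(axis=0)
    if prior is not None:
        log_post = log_post + np.log(prior)
    log_post -= log_post.max()                                   # numerical stability
    posterior = np.exp(log_post)
    return posterior / posterior.sum()

# The decoded position plotted in panel F would be the bin with peak posterior.
```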
FIG. 6 shows the mouse visual system and a new concept for mouse virtual reality goggles (20-degree saccade). Panel A: Mouse visual field of view with monocular (green) and binocular (red) regions shown at resting eye gaze position from top-down and angled perspectives. Panel B: Same as A, but with 20-degree forward saccade in both eyes; note expanded binocular zone. Panels C-E: Columns 1, 2, 3 reproduced from panels B-D of FIG. 1. Columns 4,5 same as 2,3, but with 20-degree forward saccade in both eyes.
FIG. 7 shows fisheye shader in Unity used to generate large FOV and compensate for spherical aberrations of the iMRSIV lens. The iMRSIV lens that we used introduced a pincushion distortion (top row), as simulated using Zemax. Thus, we first applied a fisheye distortion to the input image (bottom left); when that image is passed through the lens, as simulated in Zemax, the output image (represented with red in the overlay image) is now largely undistorted (bottom right) and highly similar to the original checkerboard input image (represented with cyan in the overlay image).
FIG. 8 shows quantification of similarity between Zemax and real mouse retinal projections for monitor and iMRSIV displays. Panel A: Resulting image of the checkerboard object on the Zemax simulated eye retina with monitor at a distance of 200 mm, view from the back of the retina (same as panel D of FIG. 2). Edges of the checkerboard were detected and overlaid in red (‘edge detection’). Panel B: Resulting image of the checkerboard object on the Zemax simulated eye retina with iMRSIV, view from the back of the retina (same as panel L of FIG. 2). Edges of the checkerboard were detected and overlaid in cyan (‘edge detection’). Panel C: Vertex points were selected from the checkerboard on the Zemax-simulated retina images (monitor from panel A, red dots; iMRSIV from panel B, blue dots) and superimposed on the Zemax-simulated retina image with the monitor (from panel A). Panel D: Deviation between vertices shown in panel C. The Cartesian distance between pairs of points is calculated and then normalized to the total diameter of the eye used in the model. These distances are then averaged over columns or rows of the checkerboard to attain deviation distance as a function of the x-axis or y-axis, respectively. As a coarse estimate, a 1% deviation corresponds to about 0.03 mm (eye diameter about 3 mm) or to about 1.4 degrees (eye diameter about 140 degrees), which is less than the mouse visual acuity of 0.375 cycles/degree (or 2.6 degrees/cycle). Panel E: Superposition of Zemax-simulated retina images or detected edges from monitor (panel A) and from iMRSIV (panel B). Scaling the iMRSIV image by 5% (right) corrects for the slight magnification difference between the two optical systems. Panel F: Image of the real world computer monitor checkerboard object on the retina of an extracted mouse eye (same as panel E of FIG. 2, but now shown across 4 separate eye experiments). After registering images to the Zemax simulated image (from panel A), vertex points were selected from the checkerboard images on the extracted eyes. Detected edges from Zemax simulated image superimposed as well to aid comparison. Panel G: Vertex points (selected from real eye images in panel F) superimposed on the Zemax-simulated retina image with monitor. Panel H: Deviations calculated for each of the 4 eye experiments. Each dot represents data from one eye; line and shading represent mean +/- SEM across the 4 eyes. Panels I-K: Same as panels F-H but using iMRSIV (as in panels L-M of FIG. 2). Panels L-Q: Same as panels F-K but with 20-degree gaze deviation (as in panels J, N and O of FIG. 2).
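The deviation metric of FIG. 8 (Cartesian distance between matched checkerboard vertices, normalized to the eye diameter and averaged over rows or columns) can be computed along the following lines; the vertex arrays and the eye diameter are placeholders, and this is only a sketch of the quantification described above.

```python
import numpy as np

def vertex_deviation_percent(vertices_a, vertices_b, eye_diameter, axis="x"):
    """Percent deviation between matched checkerboard vertices on two retina images.

    vertices_a, vertices_b : (n_rows, n_cols, 2) arrays of (x, y) vertex positions,
        in the same units as eye_diameter (e.g., mm in the Zemax model).
    eye_diameter : eye diameter used to normalize distances to a percentage.
    axis : "x" returns deviation as a function of the x-axis (one value per column),
           "y" returns deviation as a function of the y-axis (one value per row).
    """
    dist = np.linalg.norm(vertices_a - vertices_b, axis=-1)   # per-vertex distance
    pct = 100.0 * dist / eye_diameter                         # normalize to eye size
    return pct.mean(axis=0) if axis == "x" else pct.mean(axis=1)
```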
FIG. 9 shows optical distortions incurred by misalignment of iMRSIV system. Panels A-C: We tested the distortions incurred by misalignment of the iMRSIV system relative to the eye using Zemax simulations. The default configuration is 1 mm of distance from the inner curve of the iMRSIV lens to the lens of the eye with no displacement or rotation. In each case, we altered the alignment along one dimension and acquired the Zemax-simulated retina image of the checkerboard pattern. We compared to (and superimposed) the retina image using the default configuration (as used in panel L of FIG. 2 and panel B of FIG. 8) and quantified the deviations as a percentage of the eye diameter. As a coarse estimate, a 1% deviation corresponds to about 0.03 mm (eye diameter about 3 mm) or to about 1.4 degrees (eye diameter about 140 degrees), which is less than the mouse visual acuity of 0.375 cycles/degree (or 2.6 degrees/cycle). Panel A: Lens-eye displacement (axial). The lens-eye distance was increased by +1 mm, +2 mm, or +3 mm from the default distance of 1 mm. Panel B: Lens displacement (lateral). The iMRSIV lens was displaced relative to the eye position by 1 mm, 2 mm, or 3 mm. Panel C: Lens rotation. The iMRSIV lens and display were together rotated relative to the axis of the eye and retina.
FIG. 10 shows curved vs non-curved side comparison. Due to mechanical limitations, we could only curve the screen along one axis (azimuthal). Here we used Zemax to simulate the images formed on the retina with and without curvature of the screen. Optically, distortions between the two axes were practically identical. However, along the curved axis, we achieved a slightly larger FOV. We chose the azimuthal axis because the mouse makes more frequent and larger saccades along this axis, but the curvature could easily be switched to the vertical direction if desired. Panel A: Resulting image of the checkerboard object on the Zemax simulated eye retina with iMRSIV (with and without curvature), view from the back of the retina. Edges of the checkerboard were detected and overlaid in red or cyan (‘edge detection’), respectively, and superimposed (‘overlay of detected edges’). Panel B: Vertex points (selected from ‘Curved’ and ‘Flat’ images in panel A) superimposed on the Zemax-simulated retina image with the curved iMRSIV (‘Curved’ from panel A). Panel C: Deviation between vertices shown in panel B. The Cartesian distance between pairs of points is calculated and then normalized to the total diameter of the eye used in the model. These distances are then averaged over columns or rows of the checkerboard to attain distance as a function of the x-axis or y-axis, respectively. Panel D: Same as panel A but with 20-degree gaze deviation and a full square checkerboard as the display pattern. Edges detected from checkerboard are shown superimposed on the retina images and also each other (‘overlay of detected edges’). The ‘Curved’ screen provides a slightly larger field of view; this is visualized by the vertical straight red and cyan lines, which delineate the edge of the image formed for the ‘Curved’ and ‘Flat’ configurations, respectively.
FIG. 11 shows verification of freezing response to looming stimulus. Panel A: Along with the treadmill velocity, we also took video of the mouse during presentation of a looming stimulus. We quantified any movement by measuring the energy averaged over pixels within an ROI (sum of the square of the time derivative at each pixel). This measure provided a sensitive means of detecting any motion of the mouse, even if the treadmill was not moved. Shown here is a single frame from the movie and three ROIs tested: ‘Mouse’ (red), which selected the whole body; ‘Face’ (blue), which selected the head/neck; and “Paws” (green), which selected the forelimbs. Panel B: For the exemplar mouse (‘C4’) we plotted the treadmill velocity and the energy in each ROI over the course of the entire behavior session during which the loom was presented. Underneath each trace, we also plot a threshold indicator function that detects when the corresponding trace is different from zero. As can be appreciated from the plot, all measures are highly correlated. Importantly, in the time from the looming stimulus until the first movement, all channels show zero motion, verifying that the zero treadmill velocity reflects what is likely true freezing by the mouse (and not simply immobility). Panel C: For four individual experiments, we show the treadmill velocity and the energy in the ‘Mouse’ whole-body ROI for a 2-minute span around the time of the loom.
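The motion-energy measure described in panel A of FIG. 11 (the square of the time derivative at each pixel, averaged over the pixels of an ROI) could be implemented as sketched below; the video array and ROI mask are placeholders.

```python
import numpy as np

def roi_motion_energy(frames, roi_mask):
    """Motion energy per frame transition within a region of interest.

    frames   : (n_frames, height, width) grayscale video, as floats
    roi_mask : (height, width) boolean mask selecting the ROI
               (e.g., whole body, face, or paws)
    Returns an (n_frames - 1,) array: the square of the frame-to-frame time
    derivative at each pixel, averaged over the ROI pixels.
    """
    diff = np.diff(frames, axis=0)            # time derivative at each pixel
    squared = diff ** 2                       # square of the derivative
    return squared[:, roi_mask].mean(axis=1)  # average over pixels within the ROI
```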
FIG. 12 shows cortical regions that are accessible with an overhead microscope and 10X objective. Panel A: 3D model of the 2p imaging configuration, showing the mouse skull and eyes, head plate, iMRSIV lens and displays, and the position of the 10X objective and the cone of light centered above the position of CA1. Panel B: Accessible (green) and inaccessible (red) regions of the dorsal surface of cortex using a standard overhead microscope and 10X objective with iMRSIV. Panel C: Overlap of accessible-inaccessible regions along with a mouse brain atlas.
FIG. 13 shows comparison of CA1 place cells in iMRSIV system to traditional 5-panel virtual reality. Panel A: Lap-by-lap activity of three exemplar CA1 neurons during navigation in iMRSIV. Mean traces are shown underneath. Reliability score, defined as the fraction of laps with significant firing within the respective place field of each neuron, is indicated. Histogram (right inset) shows the distribution of reliability scores for all place cells across 7 imaging sessions using iMRSIV. Panel B: Aggregate place cell data for all imaging sessions on the linear track (including familiar sessions and first part of track switch sessions when the track was familiar; the subset of these for only familiar sessions is shown in panel B of FIG. 5), for both traditional 5-monitor VR and iMRSIV. Mean transient rate vs. track position for all place cells from familiar environment imaging sessions (5-monitor controls: n=5 mice; iMRSIV: n=4 mice), even trial number firing patterns sorted based on place field location on odd trials, and histogram of place field peak locations underneath. Panel C: Quantification of place cell characteristics using four different metrics. Fraction place cells: fraction of cells in a session that are place cells (see Methods). Spatial field width: length of track over which lap-averaged cell firing is greater than 30% of the max, applied to place cells only. Mean spatial information: spatial information score, applied to all cells. Reliability: fraction of laps with significant firing within the place field of that cell, applied to place cells only (see Methods). Each point represents the mean for all cells from one imaging session; black cross represents mean ± SEM across sessions. Statistical tests performed between 5-monitor controls and iMRSIV data (2-sample t-test). Panel D: Population response to looming stimulation. The mean transient rate for a given imaging session was triggered on the time of the onset of the looming stimulus.
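Two of the metrics listed in panel C of FIG. 13 lend themselves to short sketches. The 30%-of-max field-width definition is taken directly from the caption; the spatial information formula shown is the standard Skaggs information score and is assumed here, since the caption defers the details to the Methods.

```python
import numpy as np

def spatial_field_width(rate_map, bin_size_cm, threshold=0.3):
    """Length of track over which lap-averaged firing exceeds threshold * max
    (the 30%-of-max definition given in panel C of FIG. 13)."""
    above = rate_map > threshold * rate_map.max()
    return above.sum() * bin_size_cm

def spatial_information(rate_map, occupancy):
    """Standard Skaggs spatial information score, in bits/spike (assumed form).

    rate_map  : (n_bins,) mean firing/transient rate per spatial bin
    occupancy : (n_bins,) time spent in each spatial bin
    """
    p = occupancy / occupancy.sum()           # occupancy probability per bin
    mean_rate = np.sum(p * rate_map)
    ratio = rate_map / mean_rate
    valid = ratio > 0                         # avoid log2(0) for silent bins
    return np.sum(p[valid] * ratio[valid] * np.log2(ratio[valid]))
```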
FIG. 14 shows iMRSIV goggle design with eye tracking IR optical path (left) and CAD design (right). The iMRSIV visible OLED display is covered with a dichroic film (IR reflective and visible passing) so that it acts as an IR mirror for the eye tracking path while transmitting the VR visible scene. Visible light is blocked from the IR camera by a visible light filter (IR passing).
FIG. 15 shows 3D model of the prototype with light path traced from eye pupil to camera shown in red.
FIG. 16 shows an example of eye position and pupil size tracking using the (prototype) eye tracking version of iMRSIV (shown in FIG. 14) in a head-fixed mouse running on a treadmill. Only one side of the iMRSIV system was used for this demonstration.
DETAILED DESCRIPTION OF THE INVENTION
The invention will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. However, this invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this specification will be thorough and complete and fully convey the invention's scope to those skilled in the art. Like reference numerals refer to like elements throughout.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the invention, and in the specific context where each term is used. Certain terms used to describe the invention are discussed below, or elsewhere in the specification, to provide additional guidance to the practitioner regarding the description. For convenience, certain terms may be highlighted, for example using italics and/or quotation marks. The use of highlighting has no influence on the scope and meaning of a term; the scope and meaning of a term are the same, in the same context, whether or not it is highlighted. It will be appreciated that the same thing can be said in more than one way. Consequently, alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance is to be placed upon whether or not a term is elaborated or discussed herein. Synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only, and in no way limits the scope and meaning of the invention or of any exemplified term. Likewise, the invention is not limited to various embodiments given in this specification.
It will be understood that, as used in the description herein and throughout the claims that follow, the meaning of “a”, “an”, and “the” includes plural reference unless the context clearly dictates otherwise. Also, it will be understood that when an element is referred to as being “on” another element, it can be directly on the other element or intervening elements may be present therebetween. In contrast, when an element is referred to as being “directly on” another element, there are no intervening elements present. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, or section without departing from the invention's teachings.
Furthermore, relative terms, such as “lower” or “bottom” and “upper” or “top,” may be used herein to describe one element’s relationship to another element as illustrated in the figures. It will be understood that relative terms are intended to encompass different orientations of the device in addition to the orientation depicted in the figures. For example, if the device in one of the figures is turned over, elements described as being on the “lower” side of other elements would then be oriented on “upper” sides of the other elements. The exemplary term “lower” can, therefore, encompass both an orientation of “lower” and “upper,” depending on the particular orientation of the figure. Similarly, if the device in one of the figures is turned over, elements described as “below” or “beneath” other elements would then be oriented “above” the other elements. Therefore, the exemplary terms “below” or “beneath” can encompass both an orientation of above and below.
It will be further understood that the terms “comprises” and/or “comprising,” or “includes” and/or “including” or “has” and/or “having”, or “carry” and/or “carrying,” or “contain” and/or “containing,” or “involve” and/or “involving,” and the like are to be open-ended, i.e., to mean including but not limited to. When used in this specification, they specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and this specification, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used in this specification, “around”, “about”, “approximately” or “substantially” shall generally mean within 20 percent, preferably within 10 percent, and more preferably within 5 percent of a given value or range. Numerical quantities given herein are approximate, meaning that the term “around”, “about”, “approximately” or “substantially” can be inferred if not expressly stated.
As used in this specification, the phrase “at least one of A, B, and C” should be construed to mean a logical (A or B or C), using a non-exclusive logical OR. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
The description below is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. The broad teachings of the invention can be implemented in a variety of forms. Therefore, while this invention includes particular examples, the true scope of the invention should not be so limited since other modifications will become apparent upon a study of the drawings, the specification, and the following claims. For purposes of clarity, the same reference numbers will be used in the drawings to identify similar elements. It should be understood that one or more steps within a method may be executed in a different order (or concurrently) without altering the principles of the invention.
Visual virtual reality (VR) systems for head-fixed mice offer the ability to investigate spatial navigation behaviors and perform experiments that are difficult in real world studies. VR also offers the ability to dissect the neural circuitry underlying spatial behaviors using recording techniques that require large platforms and a high degree of mechanical stability. However, despite these advantages, current VR approaches using curved projection screens or multiple monitors are physically large and have limitations that reduce immersion in the virtual environment. For example, these systems do not fully illuminate the visual field of view of mice, do not stereoscopically illuminate the binocular zone, and leave visible many elements of the lab frame.
In view of the foregoing mentioned limitations, this invention discloses a virtual reality goggle for mice, i.e., an iMRSIV (Miniature Rodent Stereo Illumination VR) system, which is about 10x smaller than existing VR systems. The iMRSIV system separately illuminates each eye for stereo illumination of the binocular zone and illuminates each eye with an about 180-degree field of view, thus excluding the lab frame from view while also accommodating saccades. By directly comparing mouse behavior in the iMRSIV system to a more typical monitor-based setup, it is found that mice navigating using iMRSIV engaged in virtual behaviors more quickly, based on their anticipatory licking behavior. Mice also displayed dramatic freezing and fleeing reactions to looming stimulation of the overhead binocular zone — a region difficult to illuminate with current VR systems. By combining the iMRSIV system with two-photon functional imaging, we establish the existence of large populations of place cells in the hippocampus during virtual navigation, global remapping during an environment change, and the first descriptions of the response of place cell ensembles to overhead looming stimulation. Without intent to limit the scope of the invention, exemplary embodiments of the invention are given below.
In some embodiments, the virtual reality system comprises a pair of concave lenses; and a pair of curved screens, arranged in relation to the pair of concave lenses and eyes of a subject, for fully illuminating the visual field of view (FOV) of the subject. The subject can be an animal or a human being.
In some embodiments, the virtual reality system is configured to image objects displayed on the curved screen onto the eye retinas of the subject through the concave lens.
In some embodiments, the virtual reality system is configured to illuminate each eye with an about 180-degree field of view in all directions.
In some embodiments, the virtual reality system is configured to separately illuminate each eye for stereo illumination of the binocular zone, thereby excluding a lab frame from view while also accommodating saccades.
In some embodiments, the about 180-degree field of view includes about 140 degrees for each eye FOV and +/- 20 degrees for saccades.
In some embodiments, each concave lens is a positive-meniscus lens having an inner surface operably facing an eye of the subject.
In some embodiments, each concave lens is arranged such that each eye is centered at a predetermined distance from an inner surface of each positive-meniscus lens.
In some embodiments, each curved screen is a curved illumination display.
In some embodiments, each curved screen is a flexible light-emitting diode (LED) display.
In some embodiments, the virtual reality system provides a mean resolution of about 2.2 pixels/degree, or better, across the about 180-degree FOV.
In some embodiments, each curved screen is a high-resolution organic light-emitting diode (OLED) screen configured to increase the pixels/degree to further exceed the acuity of mice; the immersiveness of the VR experience for mice can be further increased by incorporating other sensory modalities, including olfactory, auditory, and tactile modalities, into the virtual simulation.
In some embodiments, the virtual reality system further comprises a pair of screen holders, each screen holder with a curvature matching that of the screen, to which the curved screen is affixed; and a pair of lens holders, each lens holder with the lens attached to one end and the other end mated to the screen holder such that the lens is centered at the desired distance from the curved screen.
In some embodiments, each lens holder is mated to the screen holder with magnets.
In some embodiments, an optical axis of the virtual reality system is aligned with the optical axis of the mouse eye at its resting position.
In some embodiments, when aligned, the lens holder is in a desired position with the eye centered at an about 1 mm distance from an inner surface of each positive-meniscus lens.
In some embodiments, the virtual reality system is compatible with two-photon functional microscopy.
In some embodiments, the virtual reality system allows one to place it under an upright two-photon microscope, providing a full FOV, including the overhead visual region, while imaging.
In some embodiments, a custom shielding system is provided to fit around the objective and connected to a ring on the head of the subject, in order to block light from the illuminated screens from being detected by the microscope’s photodetectors.
In some embodiments, by combining the virtual reality system with two-photon functional imaging, the virtual reality system is usable to establish the existence of large populations of place cells in the hippocampus during virtual navigation, global remapping during an environment change, and the first descriptions of the response of place cell ensembles to overhead looming stimulation.
In some embodiments, the virtual reality system is usable for studying neural coding properties of visual behaviors that utilize the large overhead binocular region thought to play a critical role in many rodent behaviors.
In some embodiments, the virtual reality system further comprises at least one separate, but compatible, optical path for a camera to monitor eye movements and pupil size.
In some embodiments, the virtual reality system further comprises prism mirrors for reorienting the placement of the curved screens relative to the head of the subject, thereby affording more access to certain brain regions.
In some embodiments, the virtual reality system further comprises a data acquisition module for electrical recordings of data and data processing.
In some embodiments, the virtual reality system is compatible with other VR approaches that rotate the animal in conjunction with movements through the virtual space to activate the vestibular system. In some embodiments, the virtual reality system is wearable by a freely moving subject.
In some embodiments, the virtual reality system is usable for augmented visual reality paradigms in which the other senses, as well as self-motion cues, are preserved.
In some embodiments, the virtual reality system is a miniature rodent stereo illumination VR (iMRSIV) system that is at least about 10 times smaller than existing VR systems.
In some embodiments, the virtual reality system further comprises an eye tracking module configured to determine position and orientation of the eye of the subject for aligning each eye to a proper location with respect to the virtual reality system and measuring the orientation of the eye, once aligned.
In some embodiments, the eye tracking module comprises an infrared (IR) illumination, an IR mirror, and an IR camera positioned in an eye tracking path in relation to the eye of the subject.
In some embodiments, the IR camera comprises CCD (charge-coupled device) and/or CMOS (complementary metal-oxide-semiconductor) sensors.
In some embodiments, the IR illumination comprises one or more IR LEDs emitting IR light with a wavelength of about 850 nm.
In some embodiments, a dichroic film is provided to cover the display to act as the IR mirror in the eye tracking path while transmitting the VR visible scene.
In some embodiments, the dichroic film is an IR reflective and visible passing film. In one embodiment, the eye tracking module further comprises a visible light filter positioned in the eye tracking path in the front of the camera to block the visible light from the camera and passing the IR light.
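The eye tracking module described above delivers IR images of the eye to the camera; the specification does not fix a particular algorithm for extracting pupil position and size from those images, so the following is only one common approach (dark-pupil thresholding and ellipse fitting with OpenCV), shown as an illustrative sketch.

```python
import cv2

def track_pupil(ir_frame, dark_threshold=40):
    """Estimate pupil center and diameter from one 8-bit grayscale IR frame.

    Illustrative only: thresholds the dark pupil under IR illumination and
    fits an ellipse to the largest dark blob. The threshold and preprocessing
    would need tuning for a real setup.
    """
    blurred = cv2.GaussianBlur(ir_frame, (5, 5), 0)
    # The pupil appears dark under IR illumination: keep pixels below threshold.
    _, mask = cv2.threshold(blurred, dark_threshold, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)     # largest dark blob
    if len(pupil) < 5:                             # fitEllipse needs >= 5 points
        return None
    (cx, cy), (major, minor), _angle = cv2.fitEllipse(pupil)
    return {"center": (cx, cy), "diameter": (major + minor) / 2.0}
```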
The novel VR system according to the invention, among other things, possesses advantages over the existing VR systems. For example, the novel VR system can operably illuminate the full field of view of the mouse, with a custom lens design, making the system fully immersive (no external frame visible). The novel VR system can also provide additional field of view for saccades (eye movements). In addition, the novel VR system can provide stereo illumination of the mouse visual system, thereby providing depth information, and offers solutions for alignment to the eyes.
The invention may have widespread applications in rodent neuroscience including, but not limited to, memory research, fear research, visual processing, sensory integration, rodent behavior training, and immersive human VR goggles. These and other aspects of the invention are further described below. Without intent to limit the scope of the invention, exemplary instruments, apparatus, methods, and their related results according to the embodiments of the invention are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the invention. Moreover, certain theories are proposed and disclosed herein; however, whether they are right or wrong, they should in no way limit the scope of the invention so long as the invention is practiced according to the invention without regard for any particular theory or scheme of action.
EXAMPLE 1:
FULL FIELD-OF-VIEW VIRTUAL REALITY GOGGLES FOR MICE
In this example, we developed a mouse virtual reality goggle system (panel D of FIG. 1) where the lab frame is not visible and each mouse eye is separately illuminated, providing a full field of view (with additional field of view for saccades, FIG. 6) and stereo illumination to the binocular visual zone, including the critical overhead region. We validate our optical design using optics simulations and real world measures of retina illumination. We then demonstrate the usefulness of our small footprint iMRSIV system by training mice in a virtual navigation task and comparing their task engagement to that in a conventional VR system. We demonstrate that mice displayed dramatic freezing and fleeing reactions to looming stimulation of the overhead binocular zone, a behavioral response that was not observed in the conventional VR system. Additionally, we performed two-photon functional imaging of CA1 populations during these tasks and demonstrate that our system activates large populations of place cells during linear track navigation and global remapping during a track switch paradigm. Finally, in an experiment made uniquely possible with our system, we recorded place cell firing patterns before, during and after overhead looming stimulation and discovered previously unknown place cell remapping and remote encoding patterns.
iMRSIV goggle device design and validation
To facilitate the design and testing of the optics of our iMRSIV system, we sought to use Zemax computer simulations. We began with existing Zemax models of the mouse eye and examined the image generated on the simulated eye retina by a 482×261 mm checkerboard object at a distance of 200 mm (panels A, B and D of FIG. 2). We reproduced this arrangement in the real world by generating the same size and distance checkerboard pattern on a computer monitor and examined the resulting image of the object on the retina of an extracted mouse eye (panels C and E of FIG. 2). We then made minor modifications to the Zemax mouse eye model to obtain highly similar retinal illumination patterns between the real world experiments and simulations (panels D-E of FIG. 2; panels A and F-H of FIG. 8), thus obtaining a simulation environment that we could use to design and test our iMRSIV system.
Using this simulation environment, we were able to test dozens of optical designs to achieve our goal of illuminating each mouse eye with an about 180-degree field of view (FOV) in all directions (140 degrees for each eye FOV +/- 20 degrees for saccades), which we aimed to achieve with a single screen and single lens per eye (panel F of FIG. 2). We found that it is not possible to realize a 180-degree FOV with a plano-convex or bi-convex lens. Instead, a positive-meniscus lens would be required so that each mouse eye could be fully enveloped. We arrived at a unique optical system that achieved an about 180-degree FOV by using a custom designed, high NA, positive-meniscus lens paired with a small curved illumination display (panels G-H of FIG. 2).
We then asked whether, in a Zemax simulation, we could reproduce the above checkerboard illumination of the mouse retina using our small display and lens combination. We first needed to compensate for aberrations introduced by our optical element, so we used Unity3D to recreate the checkerboard arrangement in a virtual world simulation (482×261 mm checkerboard object at a 200 mm distance) and generated a 180-degree FOV of this scene using a single Unity3D camera and a custom fish-eye shader (panel I of FIG. 2). Not only did the fisheye shader provide for a 180-degree FOV, but it also compensated well for the spherical aberrations of our custom lens (FIG. 7). This 180-degree FOV was displayed on the small curved display in Zemax and used as the object so we could examine the resulting image (through the positive-meniscus lens) on the simulated mouse eye retina, which was centered at a 1 mm distance from the inner surface of the lens. We found that the resulting simulated retina image of the checkerboard pattern (panel L of FIG. 2; 140 degrees of the available 180-degree FOV) was highly similar to both the Zemax model of the checkerboard object (panel D of FIG. 2) and the real world arrangement of this scene using a computer monitor and extracted mouse eye (panel E of FIG. 2 and panels A-E of FIG. 8).
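The custom Unity3D fish-eye shader itself is not reproduced in this text. The sketch below shows the standard equidistant fisheye mapping that such a shader typically starts from (screen radius proportional to the angle from the optical axis), before any lens-specific distortion compensation; it is an illustration, not the authors' shader.

```python
import numpy as np

def equidistant_fisheye_direction(u, v, fov_deg=180.0):
    """Map normalized screen coordinates to view-ray directions.

    u, v : arrays in [-1, 1], with (0, 0) at the image center.
    Uses the equidistant fisheye model: the angle from the optical axis grows
    linearly with radial distance on the screen, reaching fov_deg / 2 at r = 1.
    Returns unit direction vectors (x, y, z), with z along the optical axis.
    """
    r = np.sqrt(u ** 2 + v ** 2)                  # radial distance on the screen
    theta = r * np.radians(fov_deg) / 2.0         # angle from the optical axis
    phi = np.arctan2(v, u)                        # azimuth around the axis
    sin_t = np.sin(theta)
    return np.stack([sin_t * np.cos(phi),         # x component
                     sin_t * np.sin(phi),         # y component
                     np.cos(theta)], axis=-1)     # z component (optical axis)
```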
We then fabricated our new optical system for real-world use and validation. The lens was custom ground to our design specifications (Shanghai Optics) and a small, flexible, round OLED screen (1.39-inch diameter, 400x400 pixels, Innolux) was used for the display. The final assembly included two separate 3D printed parts, which were held together with magnets: first, a screen holder with a curvature matching that of our simulations, to which the OLED screen was affixed; second, a cone-shaped lens holder with the lens glued to one end and the other end mated to the screen holder with three magnets such that the lens is centered at the desired distance from the screen (6.3mm; panel K of FIG. 2). Across a 180-degree FOV, our system provided a mean resolution of 2.2 pixels/degree, which is higher than the 0.375 cycles/degree visual acuity estimated for the C57BL6 mouse strain used for behavior and imaging experiments here.
We then asked whether, using our real optical system, we could reproduce the above checkerboard illumination of the mouse retina. We used the Unity3D virtual checkerboard arrangement (482x261mm checkerboard object at a 200mm distance) and fisheye shader to generate a 180-degree view of this scene (from panel I of FIG. 2) and illuminated the real OLED screen with this view. We then examined the resulting image of the checkerboard object on the retina of an extracted mouse eye, centered at a 1mm distance from the inner surface of the positive-meniscus lens (panel M of FIG. 2). We found that the resulting retina image of the checkerboard pattern (panel M of FIG. 2; 140 degrees of the available 180-degree FOV) was highly similar to both the Zemax models of the checkerboard object (panel L of FIG. 2; panels I-K of FIG. 8) and the real world arrangement of this scene using a computer monitor (panel E of FIG. 2; panels F-K of FIG. 8). Finally, we simulated a 20-degree saccade using both Zemax simulations and real world extracted mouse eye experiments (panels J, N and O of FIG. 2). We found highly similar retina images of the checkerboard pattern in both cases (panels N-O of FIG. 2; panels L-Q of FIG. 8). Thus, we established and validated a new optical system able to illuminate an approximately 180-degree FOV for the mouse eye — a 140-degree FOV +/- 20 degrees in each direction for saccades.
The small OLED screen, custom positive-meniscus lens and 3D printed parts make up one half (one eye) of our mouse virtual reality goggle system; a duplicate assembly was therefore made for the other eye. Unity3D cameras were then used to generate the approximately 180-degree FOV for each display for each eye. For proper placement of the Unity3D cameras, we used a virtual mouse and placed one camera at each eye, with the camera angles with respect to the mouse set at the resting eye position (22-degrees vertical elevation from the lambda-bregma plane and 64-degrees azimuth from the midline). This arrangement ensured proper views of the virtual world for each eye and, because of the different position of the cameras in Unity3D, each one created a different view of objects in the binocular region, thus providing stereoscopic information to the mouse. Together, these components, the renderings of the virtual world for each eye and the custom Unity3D shader complete our goggle system, which we refer to as the iMRSIV (Miniature Rodent Stereo Illumination VR) system.
iMRSIV behavior apparatus and device-eye alignment procedures
We next sought to use the iMRSIV system with head fixed mice running on a treadmill. We added a cylindrical treadmill, water reward delivery system, capacitive lick sensor, fixed head-plate mounting posts to hold the head-plate at the same location for each mouse in each session, structural components to hold each half of the iMRSIV system in place and two 3D micromanipulators for alignment of the iMRSIV system (panels A-B of FIG. 3). We modified Unity3D to communicate with a National Instruments DAQ card (PCIe-6323), and then used a digital output signal from this card to control a solenoid for water rewards, a digital input for lick sensor monitoring, and a quadrature encoder input to read treadmill velocity from a rotary encoder (panel C of FIG. 3).
Our Zemax simulations and extracted eye experiments highlighted the need for precise alignment and positioning of the iMRSIV system lenses relative to the eyes of the mouse (FIG. 9). For angular alignment, the manipulators and structural components were designed to align the iMRSIV system optical axis with the optical axis of the mouse eye at its resting position (22-degrees vertical elevation from the lambda-bregma plane and 64-degrees azimuth from the midline), which matched the angles used for the virtual cameras in Unity3D. With the angle set, proper positioning then required each eye to be centered at a 1 mm distance from the inner surface of each positive-meniscus lens. To achieve this, we used two steps. First, during the surgery, we used a 3D printed frame to implant a head-plate at the same location with respect to the eyes across different mice (panel D of FIG. 3, Methods). This frame included indicator targets, which when aligned to the center of the mouse eyes, specified the correct head-plate implantation location. These surgical methods greatly reduced mouse to mouse and session to session variability in the location of the eyes with respect to the head-plate, and therefore with respect to the iMRSIV system (further details on the procedure and quantification of expected variance are provided under “iMRSIV alignment procedures” in Methods). Second, for final positioning for each mouse at the start of each behavioral session, mice were head fixed using the mounting posts and then a 3D printed frame was used to position each half of the iMRSIV system with respect to each eye (panel E of FIG. 3). The conical lens holder was removed from each half (pulled off from magnetic attachment) and replaced with a frame with an eye target, which was aligned to each eye using the micropositioners. Once aligned, the target was again replaced with the conical lens holder, which was now in the correct position with the eye centered at a 1 mm distance from the lens (panel F of FIG. 3).
iMRSIV spatial behaviors: linear track and looming stimulation
With the optical components of our iMRSIV system validated, the system incorporated into a head-fixed behavior apparatus with treadmill and rewards, and methods in place to precisely position the system over the eyes of the mice, we next sought to determine whether mice could learn to perform a virtual linear track task. In this task, mice navigate a linear track to a fixed reward location; trained mice in such tasks develop behaviors indicative of anticipation of the reward location. We trained 7 water scheduled mice to run along a 3-meter virtual linear track (panel A of FIG. 4). The mice started in a tunnel at the track start, ran across an open field, received a 3 µL water reward near the end of the field, entered a tunnel and were then teleported back to the track start for another trial. As a point of comparison, we also trained 13 separate water scheduled mice on the same task but using a conventional virtual reality system consisting of 5 computer monitors mounted side by side. All aspects of the environments, training and mice were the same between the groups, with the only difference being the type of visual display rendering the virtual simulations (iMRSIV vs 5 panel). We found that three mice in the iMRSIV group ran more than 0.5 trials per minute during the first, approximately 45-minute session (iMRSIV group mean of 0.79 +/- 1.40 trials/minute in the first session), and on average mice in this group reached expert levels (3.21 +/- 1.80 trials/minute) after about 6 days of training (panel B of FIG. 4). Only 1 mouse in the 5-panel group ran more than 0.5 trials per minute in the first session, and on average mice in this group reached expert levels at a similar number of days of training as the iMRSIV group (5-panel group mean of 0.13 +/- 0.16 trials/minute on day 1 and 2.75 +/- 1.88 trials/minute on day 6; no significant difference in trials/minute between groups, p > 0.05, 2-sample t-test).
We then examined anticipatory licking behavior of the mice in the different groups across days (panel C of FIG. 4), since this measure has previously been used to assess behavioral learning and task engagement in linear tracks. Importantly, we found that on the first day of training, the iMRSIV group displayed significantly more anticipatory licking than the 5-panel group. This was quantified by calculating a pre-licking index to determine the fraction of licking (excluding consumptive licking) that occurred just before the reward compared to other track locations (day 1: iMRSIV group pre-lick index = 0.64 +/- 0.24, 5-panel group pre-lick index = 0.06 +/- 0.11; 2-sample t-test, p=0.0004). Over 4-6 days, mice in both groups displayed similar prelicking indices, with most non-consumptive licking occurring just before the reward location. Therefore, mice engaged in a virtual navigation behavior more quickly using the iMRSIV system compared to the existing monitor based VR system, and refined their licking behavior to become highly precise and location specific after several days of training.
To take advantage of the unique access the iMRSIV system provides to illuminate the overhead visual scene, we sought to reproduce freezing and fleeing behaviors observed during real-world open field looming stimulation of the overhead region. After mice were trained on the first linear track (panel A of FIG. 4) for at least 6 days (about 2-3 rewards/minute), they were switched to a new linear track with the same tunnels at the ends, but an open field in the middle with few cues (panel D of FIG. 4). Once this track became familiar (2-3 sessions), we introduced a single, sudden overhead looming stimulus (an overhead sphere increasing in size, with a shadow cast over the mouse) in the middle of a behavior session when the mice were in the center of the open field (panel D of FIG. 4). The iMRSIV group mice displayed a dramatic reaction (panel E of FIG. 4 and FIG. 11). All mice froze after the stimulus (mean freezing time until first detected movement after the first looming stimulus in each mouse: 3.95 +/- 4.4 minutes): 3 of 7 mice froze immediately at the start of the looming stimulus, while 4 of 7 mice rapidly accelerated for several seconds (fleeing behavior) before freezing. When mice began running down the track again after freezing, their running velocity was slower than before (26.7 +/- 11.4 cm/s in the minute before loom vs 11.6 +/- 6.6 cm/s in the minute after freezing; paired t-test, p=0.043). The same stimulus was applied to a subset of the 5-panel group mice but, due to the lack of overhead illumination, the mice were not able to see the looming sphere and could only see its shadow. This lack of overhead illumination is typical for current VR systems, and even though an overhead monitor could be added to this (or any) current VR system to provide looming stimulation, it would be occluded by overhead recording equipment (two-photon microscope, recording electrode, etc.; see panel C of FIG. 1) and, further, would not generate a stereoscopic view, thus limiting any depth information provided. We therefore used the 5-panel group mice as a control for a reaction to the shadow. The 5-panel mice did not respond to the shadow (or stimulus): they displayed no acute freezing or acceleration/deceleration, and their running speed before and after the stimulus was not different (21.4 +/- 10.8 cm/s in the minute before loom vs 22.5 +/- 9.0 cm/s in the minute after loom; paired t-test, p=0.53). Therefore, the iMRSIV system provides experimental control of the overhead visual scene, which can be used to provide looming stimulation to head-fixed mice, leading to a dramatic freezing response that lasts for minutes.
Two-photon calcium imaging during iMRSIV spatial behaviors
Finally, we demonstrate the compatibility of the iMRSIV system with two-photon functional microscopy. Typically, functional microscopy systems occlude the overhead space above head-fixed mice (panel C of FIG. 1). However, the design of the iMRSIV system allowed us to place it under an upright two-photon microscope, providing a full FOV, including the overhead visual region, while imaging (FIG. 12). To block light from the illuminated screens from being detected by the microscope’s photodetectors, we designed a custom shielding system that fit around the objective and connected to a ring on the head of the mouse. Four out of the seven iMRSIV behavior group mice were injected with a virus to induce expression of jGCaMP8m in CA1 of the dorsal hippocampus and were implanted with a chronic hippocampal imaging window (panel A of FIG. 5); these four mice were used for the subsequent imaging experiments.
After at least 6 days of training, two-photon imaging was performed to record CA1 neural activity while mice performed the linear track task (8 imaging sessions from 4 mice, 4 familiar track sessions, 4 environment switch sessions; 450x450 µm field size, 30.28 frames/sec, 28.7 minutes/imaging session, 297 +/- 95 neurons segmented/field). In a familiar linear track, we identified a large number of place cells (204 +/- 61 place cells per field, 69.4 +/- 8.8% of active cells had significant place fields), and these place cells were highly reliable, with most cells active on the majority of trials (mean reliability: 0.51 +/- 0.14; panel A of FIG. 13). These cells tiled the full linear track on single trials but with a significantly larger number of place fields near the reward zone, similar to previous observations of real and virtual place cells (panel B of FIG. 5). We also performed imaging of CA1 neural activity in mice trained using the 5-panel displays instead of iMRSIV. Results were similar for CA1 place cells in the 5-panel group (panels B-C of FIG. 13).
Next, in the middle of a familiar track behavior session, we suddenly switched mice into a novel environment that they had not seen before and monitored place cell firing patterns before and after the switch. We found that, across the population, many familiar place cells did not have place fields in the novel environment (44+/-12%). The cells that did have fields across both familiar and novel environments were not spatially correlated (spatial correlation familiar to novel: 0.15+/-0.05). A new population of cells and fields was therefore recruited to encode the novel environment, indicative of global remapping, as previously observed in studies of real and virtual place cell populations (panel C of FIG. 5).
Lastly, we recorded the firing patterns of the CA1 populations before, during and after the looming stimulation (panel D of FIG. 5). In addition to imaging during the first looming session (as described in FIG. 4), we also applied a single looming stimulus on several subsequent sessions and imaged during these sessions as well (total of 11 looming sessions across 4 imaged mice, 2-3 sessions/mouse). As on the first looming session, mice displayed dramatic freezing and fleeing on subsequent days (immediate freezing in 4 of 11 sessions, fleeing followed by freezing in 7 of 11 sessions, mean freezing time to first movement 3.6 +/- 3.8 min; running speed of 22.9 +/- 8.7 cm/s in the minute before loom vs 9.8 +/- 4.9 cm/s in the minute after freezing, paired t-test, p=0.0016; including all time periods, average running speed after the loom was 79% of speed before the loom), with sustained CA1 activity in the first few seconds following the looming stimulus (panel D of FIG. 13). Interestingly, we found that many place cells with place fields in the middle of the track (around the loom location) before looming stimulation either lost their place fields or had their place fields move to a new track location in the trials after freezing. In contrast, place cells with place fields at the beginning and end of the track (in and around the tunnel, first and last 50 cm of track) displayed less change in their place firing patterns (0.54 vs 0.67 spatial correlation values middle vs ends of track for before vs after looming; paired t-test, p=0.039). This difference in place cell population changes between the middle and ends of the track was also seen using Bayesian decoding analysis (panel E of FIG. 5). The encoding model was built from (a subset of) trials before the looming stimulus and then used to decode mouse position either in (the remaining) trials before the stimulus or in the trials after the stimulus. While the decoding error before the looming stimulus was relatively low (23.6 +/- 12.7 cm), the error was significantly larger for trials after freezing (45.9 +/- 23.9 cm, p=0.02, paired t-test), with particularly larger decoding error in the middle compared to the ends of the track (53.3 +/- 24.9 cm vs 26.5 +/- 23.1 cm, p=0.0003, paired t-test). Interestingly, when we decoded mouse position during the freezing period, we found in several cases that the decoded position was persistently remote from the mouse’s actual position (panel F of FIG. 5). For example, in one mouse that froze near the exit of the tunnel near the beginning of the track, the decoded position was further down the track at the location of the loom that had just occurred (mean of 112.3 cm away). In a different mouse, which froze in the open field, the decoded position was at the end of the track in the tunnel (mean of 79.8 cm away). These results highlight the strength of the iMRSIV system for studying neural coding properties of visual behaviors that utilize the large overhead binocular region thought to play a critical role in many rodent behaviors.
Discussion
Here, we developed virtual reality goggles for mice in a system we refer to as iMRSIV. We validated our system using a combination of Zemax simulations and real-world experiments where we examined the image formed on the retina of an extracted mouse eye. Our system separately illuminates each eye to achieve stereo illumination of the binocular zone and illuminates each eye with an approximately 180-degree field of view (140-degree eye view +/- 20 degrees for saccades in every direction), thus excluding the lab frame from view. This FOV is larger than the FOV achieved using conventional rodent VR systems and further provides stereo illumination of the binocular region in a way not currently possible with existing VR systems.
We show that mice engaged (performed anticipatory licking) more quickly in a virtual linear track task in the iMRSIV system compared to a conventional monitor based mouse VR system and were able to achieve expert level performance within several days of training, similar to conventional VR. It is unclear exactly why mice were able to engage in the task in the iMRSIV system more quickly, but we hypothesize that it is because their full FOV was illuminated and the conflicting lab frame was not visible. This advantage, combined with the potential depth information provided by the stereoscopic illumination of the binocular region, may combine to provide a more immersive experience, facilitating increased task engagement and spatial awareness. Alternatively, other differences between the VR environments unrelated to immersiveness itself may also play a role, such as minor differences in screen brightness, the additional handling time needed at the start of iMRSIV sessions to align the system, etc., and further work will be needed to isolate the exact benefits conferred by the iMRSIV system. By combining the iMRSIV system with functional two-photon microscopy, we established the existence of large populations of place cells during virtual linear track navigation and global remapping during a track change paradigm, all of which were highly similar to place cell recordings from previous VR and real world experiments. In fact, for a familiar linear track, properties of CA1 place cells were highly similar between our iMRSIV and 5-monitor cohorts (FIG. 13), perhaps reflecting the fact that place cells have the capacity to encode across a wide range of contexts to produce a stable internal map of the environment. Even in our traditional 5-monitor setup, an overabundance of visual cues was present, which could thus allow CA1 to compensate for deficits in depth cues, for instance.
Previous research has found different behavioral responses to side or front looming stimuli compared to overhead looming stimuli, emphasizing the importance of being able to access and visually stimulate the overhead region for looming studies. We took advantage of the ability of the iMRSIV system to illuminate the overhead binocular region of mice — a region difficult to illuminate with current VR systems — by generating a looming stimulus. Similar to real world looming stimulus paradigms, mice displayed dramatic and long lasting freezing reactions, either immediately or after a short fleeing response. By combining the iMRSIV system with two-photon functional imaging of CA1 neurons, we were able to provide the first descriptions of the response of place cell ensembles to overhead looming stimulation. We found that place fields around the looming location became unstable and significantly changed their firing patterns, while place fields farther away appeared largely stable. Further, we found several examples during the freezing period itself where the decoded position differed significantly and persistently from the actual mouse location. The decoded position in these examples was either in the tunnel or around the loom location, which differed from the actual location of the mice. In the first case, it is tempting to think that such a result is indicative of the mouse thinking of a remote, safe location rather than the current, open field actual location, perhaps for planning purposes. Or, in the second case, indicative of the mouse thinking of the location of the loom that had just occurred, perhaps for planning or memory consolidation. However, future work will be required to fully establish the details and behavioral roles of such phenomena. While it might be possible to perform hippocampal recordings from freely moving mice with head-mounted microscopes or electrodes during overhead looming stimulation, no such recordings have yet been reported. It is possible that there are complications due to the head-mounted recording components partly occluding the overhead binocular region (panel C of FIG. 1). Our system does not suffer from these issues because the real world overhead region is not seen by the mice since the iMRSIV system covers their FOV completely.
A potential future advantage of our iMRSIV system is the significant size reduction compared to existing rodent VR systems (about 10x smaller). This could allow for the iMRSIV system to be more easily combined with microscope or other recording systems that do not have sufficient space, or are of an unusual geometry, and thus are not compatible with larger current VR systems. Further, miniaturizing virtual reality systems, as we have done here, is likely to facilitate the building and use of large scale training arrays where dozens of mice can be trained in parallel. While future work will determine the behavioral studies that benefit from iMRSIV, the following are potential experiments that may be enabled by our system: 1) use of stereo depth for studies of object localization and predation; 2) elimination of static lab frame visual inputs allowing for studies of visual flow, which may drive head direction signals in head-fixed mice; 3) looming or other overhead stimuli, as already explored; 4) improved depth perception may result in avoidance of perceived virtual cliffs, allowing for elevated maze tasks and measures of anxiety in VR. As more improvements to the immersiveness of VR are made, the gap between VR and freely-moving experiments may continue to close. In parallel, technological improvements are enabling studies that were previously not possible, such as multi-photon imaging in freely moving mice or rotational head-fixed systems to add vestibular inputs to VR. Still, each approach carries distinct advantages. VR offers the ability to dissect neural circuitry underlying behaviors using recording techniques that require physically large platforms that have yet to be miniaturized for use in freely moving rodents. Further, VR allows for manipulations that are impractical or impossible in physical environments. On the other hand, neural circuitry of course evolved to drive behavior in the freely moving case, and replicating the natural profiles of every sensory modality from freely moving rodents becomes a highly technically difficult endeavor for VR. Thus, both approaches continue to provide complementary advantages.
One potential limitation of our current iMRSIV system compared to conventional VR is the increased difficulty in tracking eye position and pupil size. These measurements are relatively easy in a conventional system where a camera can be added in the ample space between the VR screens and the mouse, but in the iMRSIV system this space is small and difficult to access. Thus, even though we provided a full 140-degree FOV with +/- 20 degrees for saccades for each eye, we were not able to determine how much of this extra FOV the mice used and how often they performed saccades. It will therefore be important for future versions of the iMRSIV system to incorporate a separate, but compatible, optical path for a camera to monitor eye movements and pupil size.
Another limitation of our iMRSIV system is optical access to certain brain areas for imaging. Due to steric hindrances between the display screens and the microscope objective, not all brain regions can be readily accessed, especially those that are far rostral and lateral (FIG. 12). Further, some microscope objectives, especially those with high NA and low working distance, will have more severe steric hindrances. In most cases, these issues may be physically impossible to avoid due to the large view angles of the mouse eye and its proximity to rostral and lateral brain regions. Indeed, most microscope objectives are probably within the mouse’s visual fields when imaging rostral and lateral brain regions in a conventional setup. Certain approaches could help circumvent these issues, however. For example, a long working distance objective (as used here) or a thin GRIN lens can be used to provide additional clearance. Also, removing the outer casing on the objective (as done here) can help add additional clearance. Tilting the head of the mouse could also be used to add some clearance for the system. Yet another approach, if the experiment allows for it, is to use electrical recordings instead of imaging. For example, Neuropixels probes are compatible with iMRSIV, though the size of the headstage (about 6x7x2mm) will also need to be taken into account to work around the steric constraints. As for the iMRSIV system itself, it may be possible to reduce the physical size in future versions, which would make it easier to access rostral and lateral brain regions. This could be accomplished using a different lens design combined with a smaller screen. Alternatively, other optical designs, such as those that incorporate a prism mirror, could reorient the placement of the display screens relative to the mouse’s head, affording more access to certain brain regions.
Another limitation of our system is the contact between the screens and some of the whiskers on the mouse’s face. This obstruction has two consequences. First, it limits access to experimenters studying the whiskers or barrel cortex. Second, because the mouse can contact the displays by whisking, it may reduce immersiveness of the system. While some workarounds may be possible, such as whisker trimming or future reductions in the size of iMRSIV itself, these are important limitations to consider when planning iMRSIV experiments.
Other future improvements and additions to the iMRSIV system could further increase the immersiveness of the VR experience for mice. This could come with improved optical components, such as higher resolution OLED screens to increase the pixels/degree to further exceed the acuity of mice, or by incorporating other sensory modalities, such as olfactory, auditory and tactile cues, into the virtual simulation. Further, here we used a cylindrical treadmill and focused on linear track tasks, but 2D open field tasks might be accessible in head-fixed mice using the iMRSIV system combined with a spherical treadmill. Though vestibular cues will still be missing in rigid head-fixed systems, which might preclude proper activation of 2D spatial firing patterns in place and grid cells, our iMRSIV system should be compatible with newer VR approaches that rotate the animal in conjunction with movements through the virtual space to activate the vestibular system. Such a combination of techniques might lead to methods to study 2D grid and place field neural circuitry in head-fixed mice. Finally, with future miniaturization, goggles small and light enough to be carried by a freely moving mouse might be achievable. Such a system could be used for augmented visual reality paradigms in which the other senses, as well as self-motion cues, are preserved.
Experimental model and study participants
Animals
All animal procedures were approved by the Northwestern University Institutional Animal Care and Use Committee. All mice were housed in a vivarium with a reversed light/dark cycle (12 hours light during the night) and all experiments were performed during the day (during their dark cycle). For behavior and CA1 imaging experiments, approximately 12-week-old adult C57BL/6J male mice (The Jackson Laboratory, strain #000664) were used. For extracted eye experiments, 10- to 14-week-old adult BALB/c mice (Charles River) of both sexes were used.
Methods
Headplate and CA1 cannulation surgery
Headplates were aligned and attached to adult C57BL/6J male mice as detailed below. For a subset of 9 mice (4 iMRSIV and 5 control), CA1 cannulation and virus injection were also performed to allow for imaging.
Anesthetized mice (1-2% isoflurane in 0.5 L/min O2) were head-fixed to a stereotaxic apparatus (Model 1900, David Kopf Instruments). The skull was leveled and aligned to bregma. We then positioned the eyes relative to the headplate holder by using a custom 3D-printed alignment tool (panel D of FIG. 3). This tool has two prongs that approximate the position of the center of each eyeball. Once centered, the tool was replaced with a custom titanium headplate (1 mm thick, eMachineShop). This headplate is the same size and shape as the alignment tool but without the centering prongs. Further details on our alignment procedures are provided below under “iMRSIV alignment procedure”. Dental cement (Metabond, Parkell) was used to adhere the headplate to the skull. Mice were monitored closely for 24 hours and given 3-5 days to recover before water restriction and behavioral training were begun.
In mice used for CA1 imaging, before attaching the headplate we performed a small craniotomy (0.5 mm) and, using a beveled glass micropipette, injected about 60 nL of AAV1-syn-jGCaMP8m-WPRE1 (Addgene catalog #162375, diluted about 8x from a 2.5e13 GC/ml stock into phosphate buffered solution) into right CA1 (2.3 mm caudal, 1.8 mm lateral, 1.3 mm beneath dura). Then a stainless steel cannula with an attached 2.5 mm No. 1 coverslip (Potomac Photonics) was implanted over CA1 [2].
Extracted eye experiment
To measure the image formed on the mouse retina, we used explanted eyes from BALB/c mice. We chose to use albino mice because the retinal epithelium is not pigmented and thus images formed on the retina using the visible spectrum can be observed by photographing the back of the explanted eye. We chose this particular strain (BALB/c) because the size of the eye and the optical parameters are highly similar to the strain of mice used for our behavior and imaging experiments (C57BL/6J) [3].
Mice were deeply anesthetized with isoflurane (2% in 0.5 L/min O2). The eye was then removed, the optic nerve transected, and any connective tissue cleared. The eye was then placed on a custom 3D printed mount that centered the eye relative to the rest of the setup. For the setup, a camera (Basler acA5472 with a 25 mm f/1.4 lens, HR978NCNH1198) was mounted behind the eye while the display (either the large flat-panel display or the iMRSIV lens and miniature display) was mounted in front of the eye. Using rotation and translation stages, we could control the distance from the displays to the eye and we could adjust the rotation of the eye relative to the display. We could also lock in the camera so that it rotated with the eye, or we could rotate it independently of the eye. The desired image was displayed on the screen and the image formed on the retina as observed from behind the eye (caudal view) or from the side (lateral view) was photographed using the camera.
Zemax simulations and new eye model
Replicating the real-world full stereo vision of mice in virtual reality is optically challenging for a number of reasons. First, the mouse eye has a large angle field of view that spans 140 degrees plus another +/- 20 degrees for saccades [4-6]. Not only does this large angle occupy a large space, but it also requires solutions that account for the Petzval field curvature. Second, the binocular region requires a solution that can deliver different perspectives of the same object to each eye (thus transmitting binocular disparity information). Because these views physically overlap, either the two eyes need to receive different images from the same position (such as is accomplished when viewing 3D televisions through polarized lenses) or the optical field needs to be separated so that the physical space illuminating the medial portions of each eye is different.
To aid our testing, we used Zemax software to simulate the optics of our design solution. We began with a published model of the mouse eye [6] and modified the exact coverage of the retina. On the basis of published eye parameters [7, 8], we expanded the retinal periphery to cover the 140+ degrees FOV (with a 3 mm eye diameter). For simulating retinal projections in Zemax, we used Image Simulation, Geometric Bitmap Analysis and Geometric Image Analysis with a 3 mm diameter retina parameter (300x300 pixels, 0.01 mm pixel size). Pupil size was varied from 0.4 mm to 1.6 mm to simulate constricted and dilated pupils as well. We then validated our updated model by taking the results obtained from our extracted eye experiments (detailed above) using a fixed display with a checkerboard pattern and comparing them to the Zemax simulation using identical parameters and the same display image (FIG. 2).
Next, we sought to identify a lens design that, when placed between a miniature display and the mouse eye, could accomplish our design goals. In particular, we wanted a solution that would project 180-degrees of the visual field while also physically occupying a small footprint so that the displays for each eye would not intersect. We began with off-the-shelf lenses but found that plano-convex or bi-convex lenses would not be adequate to cover the 180-degree range. Instead, a positive-meniscus lens was used such that, across the curvature of the eye, we could preserve an approximately fixed distance between the lens and the eye. The lens is a custom design (manufactured by Shanghai Optics) with the following specifications: diameter = 12 mm, center thickness = 4 mm, front radius curvature = 6 mm, back radius curvature = 10 mm, material: H-K9L glass. Further, the display itself (6.3 mm from the front surface of the lens) needed to have some curvature as well to reduce distortions introduced when the display-to-lens distance varies across different angles. We used a display radius of curvature of 60 mm along the azimuthal axis. As the physical constraints of the display only allow for curvature along one axis, the display remained flat (not curved) along the other (vertical) axis. The difference in curvature did not introduce any substantially different distortions along the two axes (FIG. 10). Finally, the curved screens were both rotated 25-degrees, around the eye axis, vertically from the nose. Once we had identified the exact parameters for the desired lens design in Zemax, we had the lens fabricated (Shanghai Optics). The actual lens and our design was then validated using the explanted eye as detailed above and as shown in FIG. 2. Custom 3D parts to mount the lens and display were printed using tough PLA on a 3D printer (UltiMaker S3).
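As a rough, non-authoritative check of the lens specifications above, the thick-lens (lensmaker's) equation can be used to estimate the effective focal length from the stated radii and center thickness. The short sketch below is illustrative only and was not part of the original design workflow; in particular, the refractive index of about 1.5168 assumed for H-K9L glass (the N-BK7 equivalent) and the sign convention are assumptions.

# Back-of-the-envelope thick-lens calculation for the custom positive-meniscus lens.
# Assumption (not from the source): H-K9L has n_d ~= 1.5168 (the N-BK7 equivalent).
def thick_lens_focal_length(r1_mm, r2_mm, thickness_mm, n):
    """Lensmaker's equation for a thick lens in air; radii follow the convention
    that surfaces curving toward the object side are positive."""
    inv_f = (n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm
                         + (n - 1.0) * thickness_mm / (n * r1_mm * r2_mm))
    return 1.0 / inv_f

if __name__ == "__main__":
    f = thick_lens_focal_length(r1_mm=6.0, r2_mm=10.0, thickness_mm=4.0, n=1.5168)
    print(f"approximate effective focal length: {f:.1f} mm")  # roughly 21.6 mm under these assumptions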
Quantification of similarity between retina images
To compare a pair of retina images, our procedure involved choosing the individual vertices of the checkerboard pattern in both images. Then we calculated the Cartesian distance between each pair of vertices. This distance was then normalized by the size of the retina (3 mm). These distances were then averaged over columns or rows of the checkerboard to attain deviation distance as a function of the x-axis or y-axis, respectively.
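The vertex-deviation measure described above can be summarized in a few lines. The sketch below is illustrative only (the original analysis used custom scripts) and assumes the checkerboard vertex coordinates have already been picked from each retina image and are stored as arrays of (x, y) positions in millimeters; all variable names are hypothetical.

import numpy as np

RETINA_DIAMETER_MM = 3.0

def vertex_deviation(vertices_a, vertices_b):
    # Inputs: (n_rows x n_cols x 2) arrays of corresponding checkerboard vertex coordinates (mm).
    # Cartesian distance between corresponding vertices, normalized by the retina size.
    dist = np.linalg.norm(np.asarray(vertices_a, float) - np.asarray(vertices_b, float), axis=-1)
    dist = dist / RETINA_DIAMETER_MM
    deviation_vs_x = dist.mean(axis=0)   # one value per column: deviation as a function of x
    deviation_vs_y = dist.mean(axis=1)   # one value per row: deviation as a function of y
    return 100 * deviation_vs_x, 100 * deviation_vs_y  # percent deviation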
We compared the Zemax image of the checkerboard for the monitor with the Zemax image of the same scene using iMRSIV. We found that the % deviation was small and much of the discrepancy was due to a difference in magnification between the images. The images were practically identical when the iMRSIV retina image was scaled up by 5% (panels A-E of FIG. 8).
Next, we compared the real mouse retina images to the various Zemax models (monitor, iMRSIV with the default configuration, and iMRSIV or monitor with a 20-degree rotation). Here, we first registered the images (rotation, translation, and scaling) and then calculated the % deviation. This was performed for several different experiments (different eyes from two mice), allowing us to estimate the standard deviation of this measure (panels F-Q of FIG. 8).
Unity environment and hardware
Virtual reality environments were rendered in Unity3D. The same computer was also used to synchronize behavior and two-photon imaging data during execution of VR simulations.
In the Unity environment, a model mouse was used to approximate the position and orientation of the mouse eyes. The angles were then replicated in the positions and orientations of the physical displays relative to the actual mouse (22-degrees vertical elevation from the lambda-bregma plane and 64-degrees azimuth from the midline) [8]. We also needed to correct for the spherical aberrations of our custom lens. To accomplish this goal, we used a fisheye shader. Each eye’s display is covered by a shader. The 360-degree scene in Unity is captured by seven 90-degree cameras and projected onto a sphere overlay. The sphere is captured by an 8th perspective (70-degree FOV) camera, which was placed at 267 mm from the sphere. This fisheye projection corresponds to about a 180-degree FOV, projected onto one circular display. Our custom lens also provides a strong anti-fisheye effect (see https://www.mathworks.com/help/vision/ug/camera-calibration.html); we compared the fisheye vs. anti-fisheye effects and found that they are approximately inverse transforms of one another, so there was no need to further correct the lens distortion (FIG. 7).
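For readers unfamiliar with fisheye projections, the sketch below illustrates a generic equidistant (f-theta) mapping of a 3D viewing direction onto a circular display covering a 180-degree FOV. It is not the actual Unity3D shader used here, which was additionally matched to the aberrations of the custom lens; it only shows the basic geometry such a shader implements, and the function name is hypothetical.

import numpy as np

def fisheye_project(direction, fov_deg=180.0):
    # Map a 3D view direction onto normalized (u, v) coordinates in the unit disk.
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))   # angle from the optical (+z) axis
    phi = np.arctan2(d[1], d[0])                  # azimuth around the optical axis
    r = theta / np.deg2rad(fov_deg / 2.0)         # radius grows linearly with off-axis angle
    return r * np.cos(phi), r * np.sin(phi)

# Example: a direction 90 degrees off-axis lands on the rim of the circular display.
print(fisheye_project([1.0, 0.0, 0.0]))           # -> (1.0, 0.0)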
Each lens was then paired with a small, flexible, round OLED screen (1.39-inch diameter, 400x400 pixels, Innolux).
For our control experiments using a traditional 5-monitor display, we used five cameras in Unity, angled at increments of 45°, to reproduce the physical locations of the monitors arranged as five sides of an octagon around the mouse. Each monitor was run at a resolution of 1920x1080 [9].
The refresh rate for both systems (iMRSIV and 5-panel) was 60 Hz, and both systems were driven by a video card (NVIDIA RTX 3070). Monitor brightness per unit area was higher for the round OLED screens of iMRSIV than for the large monitors we used for the traditional 5-panel display. This brightness was measured by collecting light over a 5-mm diameter region of the display using a fiber optic cable pressed against the screen and light collected on the other side using a photodetector (DET-100A, Thorlabs). For a given uniform display (either 50% gray or 100% white), the voltage measured from the photodetector was about 10-fold higher for the OLED screens. However, the exact amount of light reaching the mouse retina in each system is difficult to approximate exactly and is further complicated by differences in pupil diameter (which was not measured here). Overall brightness is a function involving integration of light from all portions of the screens, and an inverse square law describing the reduction in intensity as a function of distance from the source. Based on our estimates of these values, the overall brightness received by the mouse eye was higher in the iMRSIV system. Thus, for particular future applications, the intensity of the virtual environments could be reduced as needed.
Custom scripts were written in C# to enable communication with a data acquisition card (PCIe-6323, National Instruments) from within the Unity runtime environment. We took advantage of the fixed update clock (set to 1 ms) within Unity to gain precise control of all timed events. The data acquisition card (DAQ) was used to output timed digital output to control the opening of a water reward solenoid. The timing was calibrated to provide a volume of 3 µL of water. Inputs to the DAQ included a quadrature encoder and digital signals. The quadrature encoder was used to read running velocity from an optical encoder (E2-5000, US Digital) attached to the axis of the treadmill. These values were converted to a calibrated position along the treadmill in centimeters, which was then used to move the position of the mouse in Unity. Digital inputs were used to read contact between the tongue and the lick spout using a capacitive touch sensor (AT42QT1010, SparkFun) and also two-photon frame times. These signals were all read by the DAQ at 1 kHz. All DAQ data along with environmental variables from Unity (such as mouse position in the VR world, velocity, etc.) were continuously stored during each frame in a .dat file. Thus, we could precisely synchronize environmental variables with two-photon imaging frames when processing the data. A 3-axis translation stage was used to position the lickport (DT12XYZ, Thorlabs).
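The encoder-to-position conversion described above can be illustrated with a short sketch. The actual implementation was a C# script running inside Unity; the Python version below is for illustration only, and the counts-per-revolution and treadmill diameter values are assumed placeholders rather than values taken from the source.

import math

# Illustrative conversion of quadrature encoder counts to treadmill surface travel.
COUNTS_PER_REV = 5000 * 4        # E2-5000 line count with 4x quadrature decoding (assumed)
TREADMILL_DIAMETER_CM = 20.0     # hypothetical cylinder diameter, not from the source

def counts_to_cm(delta_counts):
    """Convert a change in quadrature counts to surface travel in centimeters."""
    revolutions = delta_counts / COUNTS_PER_REV
    return revolutions * math.pi * TREADMILL_DIAMETER_CM

# Example: per-sample count increments (read at 1 kHz) accumulated into track position.
position_cm = 0.0
for delta in (12, 15, 9):        # made-up count increments from three samples
    position_cm += counts_to_cm(delta)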
iMRSIV alignment procedure
To position the iMRSIV displays relative to each mouse eye, we developed the following alignment procedure that minimized mouse-to-mouse variability while also permitting adjustments to be made for each mouse. We utilized a newly designed headplate with a couple of features that facilitated our experimental approach. First, the grooves for mounting to the headbars were positioned further posteriorly, thus adding clearance from the head mounting bars for the iMRSIV lens + display (and also remaining outside the field-of-view of each eye). Second, the grooves were positioned exactly 30 mm apart, thus allowing precise and reliable mounting using off-the-shelf parts (such as the Thorlabs 30 mm cage system). During surgical implantation, the headbar is aligned to the eyes of each mouse. This alignment is accomplished by first using a custom 3D-printed alignment tool (panel D of FIG. 3). This tool has two prongs situated for positioning to the center of each eyeball. Once the tool is aligned (prongs centered on each eye), the stereotax micromanipulator is fixed while the tool is replaced with a headplate and cemented in place. Thus, the relative position of the headplate mount to the eyes of the individual mouse is fixed (within experimental measurement error).
We also measured the position of the eyes with respect to bregma and found some variability in the position of the eyes relative to bregma. For example, across a cohort of 7 mice, our standard deviation in bregma-eye distance is 0.11 mm medial-lateral, 0.34 mm anterior-posterior, and 0.21 mm dorsal-ventral. However, because we align to the eyes themselves, this variability does not affect our alignment and only slightly affects the accessible brain regions (FIG. 12). Note also that we placed the headplate with the skull leveled to 0 degrees (zero tilt between bregma and lambda), but it is possible that the headplate could be angled without perturbing the visual experience of the mouse.
During behavioral sessions, the headplate is attached to the headbars. To verify the placement of the mouse and that the eyes are correctly positioned relative to the iMRSIV displays, we utilized the following alignment procedure. The goal was to position the display assembly (consisting of the lens holder attached to the display holder) at the desired position relative to the mouse eye lens (panels F-G of FIG. 2, FIG. 3). The lens holder and display holder are attached using a set of 3 magnets, allowing us to attach and detach the lens holder in a reproducible manner. The assembly is attached to a 3-axis stage (3x MS1S, Thorlabs), allowing precise control of x-y-z position, along with a rotation stage (RP005, Thorlabs). To perform the alignment, first an alignment tool (panel E of FIG. 3) was attached in place of the lens holder. This tool is similar to the lens holder but instead of the lens has a probe at the desired location of the center of the front of the mouse eye lens. Thus, we could position the probe at the eye lens, retract the assembly using the micromanipulator, replace the alignment tool with the lens holder, and return the assembly back to the same position. Any final fine adjustments are then performed using micromanipulators for each iMRSIV display. In practice, however, we found that little to no adjustments were needed between mice.
To measure the precision of our alignment procedures, we replicated our alignment procedure using a replica eye (3.1 mm diameter ball bearing) placed on an xyz translation stage with micrometers precise to <25 microns (PT3, Thorlabs). Briefly, we aligned our target to the center of the ball bearing (FIG. 3E), replaced the target with the iMRSIV lens, and then measured how far the center of the ball bearing was from the center of the lens by using the translation stage to align the ball bearing to the center of the lens (confirmed with a video camera, as in FIG. 2C). We then read off the micropositioner distances needed to center the ball bearing. We repeated this procedure 5 separate times and found x,y errors were 0.31 +/- 0.14 mm and 0.16 +/- 0.05 mm. Meanwhile, our z-distance, which measured the distance between the front edge of the bearing to the iMRSIV lens surface, was 1.10 +/- 0.10 mm, which is within range of the desired 1.0 mm eye-lens distance. We simulated the effect of various misalignments using Zemax and the results, as shown in FIG. 9, indicate that minimal image distortions are incurred (typically less than the visual acuity of mice) for the positioning errors expected during actual experiments.
Behavior
Following recovery from surgery, mice were restricted to receiving 0.8-1.0 mL of water each day. Mice were weighed daily and training was begun once weights fell to about 80% of baseline.
For iMRSIV mice, once the mouse was head fixed, an alignment procedure was performed as detailed above (“iMRSIV alignment procedure”) and in FIG. 3. Note that it was not possible to perform truly blinded experiments when comparing iMRSIV mice to the 5-monitor control mice. We however matched training conditions in every aspect that we could by using mice of the same age, water restricting for the same duration with the same target weight, matching the duration of training sessions, etc. We also practiced the iMRSIV alignment procedures beforehand so as to minimize the time and potential discomfort incurred while positioning the screens around the mouse. Once proficient, we were able to perform this alignment within a couple minutes.
Once aligned, the training session was begun. Virtual environments were simulated in Unity. All environments consisted of the same basic structure. Mice start in a tunnel, run to reach a fixed reward location where a water reward is delivered to the lick spout, then continue running to the end of the track, which consists of a tunnel as well. The mice then teleport back to the start tunnel and the task repeats. Track lengths are 3 to 3.5 m. The first stage of training consisted of six sessions, one per day, each lasting about 40 minutes. These were performed in the first linear track. On the next day, a remapping experiment was performed. After at least 10 minutes in the familiar environment (typically about 30-40 or more laps), mice were instantly teleported to a novel environment.
Looming stimuli were then presented in the next two or three sessions. For these experiments, a single loom event was simulated in Unity. The loom consisted of an overhead black disk [11], which also cast a dark shadow on the ground. The loom sphere (d=37.8 mm) was placed at a height of 200 mm from the mouse, with no visibility initially. After 10 minutes, whenever the mouse next entered the trigger zone, the loom event was activated. The sphere became visible and started following the mouse without initially descending. As soon as it caught up, the loom sequence began. It descended from 200 mm to a height of 11 mm in 0.3 seconds, remained close for 0.25 seconds, and then returned to 200 mm. The loom descent repeated 3 times, following the animal’s position. Thus, the exact position at which the looming stimulus occurred varied slightly depending on the animal’s exact running behavior. The loom parameters (size, speed, position, and number of repeats) are parameters that can be changed within Unity.
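For clarity, the loom height profile implied by these parameters can be reconstructed as a simple function of time, as sketched below. The return duration and the use of linear interpolation are assumptions (the text specifies only the descent time, hold time and number of repeats), so this is an illustration of the stimulus timeline rather than the actual Unity implementation.

# Illustrative reconstruction of the loom height profile from the stated parameters
# (descend 200 mm -> 11 mm in 0.3 s, hold 0.25 s, return to 200 mm, repeat 3 times).
START_MM, END_MM = 200.0, 11.0
DESCEND_S, HOLD_S, RETURN_S = 0.3, 0.25, 0.3   # return duration assumed for symmetry
CYCLE_S = DESCEND_S + HOLD_S + RETURN_S

def loom_height_mm(t_since_loom_onset_s, repeats=3):
    if t_since_loom_onset_s >= repeats * CYCLE_S:
        return START_MM                                        # loom sequence finished
    t = t_since_loom_onset_s % CYCLE_S
    if t < DESCEND_S:                                          # linear descent toward the mouse
        return START_MM + (END_MM - START_MM) * t / DESCEND_S
    if t < DESCEND_S + HOLD_S:                                 # hold near the mouse
        return END_MM
    t_ret = t - DESCEND_S - HOLD_S                             # linear return to start height
    return END_MM + (START_MM - END_MM) * t_ret / RETURN_S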
Imaging
In the subset of cannulated mice, we performed two-photon imaging of populations of neurons in CA1 of the hippocampus during behavior sessions as described above, either with iMRSIV (4 mice) or with the traditional 5-panel display (5 mice). Imaging was performed using a customized upright microscope. A mode-locked Ti:Sapphire laser (Chameleon Ultra II, Coherent) tuned to 920 nm was raster scanned using a resonant scanning module (Sutter Instruments). Emission light was filtered (FF01-510/84, Semrock) before being collected by a GaAsP PMT (H10770PA-40, Hamamatsu Photonics). ScanImage software (Vidrio) was used to control the microscope and acquire images. A TTL frame sync signal was output to the DAQ of the VR computer to allow for synchronization of two-photon imaging times to the behavior data acquired by Unity. All imaging was performed at 512x512 pixels and 30 Hz using bidirectional scanning.
A 10X objective (UPLFLN, Olympus), with the outer housing removed to fit within the geometric constraints, was used for imaging. We removed the outer housing of the objective (unscrewing it) to increase clearance between the objective and the iMRSIV lens holder. In FIG. 12, we delineate regions of the cortex that are accessible using this objective without physically colliding with the iMRSIV lens mounts. As the placement of our headplate (and the iMRSIV system) is relative to the eyes of the animal, the exact relative location of bregma can vary across mice (and correspondingly the position of brain structures relative to bregma will vary as well). For example, across a cohort of 7 mice, our standard deviation in bregma-eye distance is 0.11 mm medial-lateral, 0.34 mm anterior-posterior, and 0.21 mm dorsal-ventral; thus, there is an uncertainty of about 0.25 mm in the boundary of which cortical regions would be accessible with the 10X objective (with removed housing) as shown in FIG. 12. Note that, in a traditional virtual reality system, the objective is within the overhead FOV of the mouse’s vision (panel C of FIG. 1). To prevent iMRSIV display light from reaching the optical path and contaminating the emission PMT, we designed a custom shielding system that consisted of a 3D printed part that fit around the objective and connected to a ring on the head of the mouse. All data was collected at a magnification of 2.0X, which resulted in a field-of-view of 450 µm x 450 µm.
Image processing
Two-photon movies were first registered to correct for motion artifacts using rigid registration [12]. Next, active cells were detected using Suite2p [13]. Fluorescence traces (brightness-over-time signals) for these cells and associated neuropil were extracted. Then, we used an integrated iterative algorithm to decompose the signal into an inferred summation of four signals: the true activity of the cell (ΔF/F0), the baseline (F0), the neuropil contamination, and noise. We assume ΔF/F0 is the result of convolution of voltage action potentials with a kernel that reflects the kinetics of intracellular Ca2+ and the Ca2+ sensor jGCaMP8m. Thus, deconvolution is performed to infer firing events [14]. For further analysis, we use these firing events after smoothing with a 170-ms Gaussian filter. The “transient rate” refers to the amplitude and frequency of these detected events in a given time window or spatial bin.
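The final smoothing step can be sketched as follows. This is an illustrative Python version only (the pipeline itself operated on Suite2p outputs with custom code), and it assumes the 170 ms refers to the Gaussian sigma, which the text does not specify.

import numpy as np
from scipy.ndimage import gaussian_filter1d

FRAME_RATE_HZ = 30.28   # imaging frame rate from the imaging sessions above

def smooth_events(deconvolved_events, sigma_ms=170.0):
    # Smooth deconvolved event amplitudes (n_neurons x n_frames) with a Gaussian in time.
    sigma_frames = (sigma_ms / 1000.0) * FRAME_RATE_HZ
    return gaussian_filter1d(np.asarray(deconvolved_events, float), sigma=sigma_frames, axis=-1)

# Usage (hypothetical variable name): smoothed = smooth_events(suite2p_deconvolved)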
Analysis
Prelicking index: This measure quantified whether mice were licking near to the reward during reward approach, indicative of learning of the reward location and anticipation of the reward. We took lickl as the mean number of licks over the 50 cm leading up to the reward location (pre-reward zone: -50 to 0 cm relative to reward location) and lick2 as the mean number of licks in the preceding 150 cm (-200 to -50 cm relative to reward location). The prelicking ratio was then calculated as lickl/(lickl+lick2). Thus, the minimum possible value of 0 indicates no licks in the pre-reward zone while the maximum possible value of 1 indicates all the licks were in the pre-reward zone. We excluded laps if no licks occurred in the defined windows (both lickl and lick2 equal to zero). In rare cases, the lick sensor did not function properly and registered contact throughout. Such laps were detected when the mean contact time across an entire lap was over 40% and were also excluded.
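A minimal sketch of this index is shown below. The original analysis was performed in MATLAB; the Python version here is illustrative, and it interprets the "mean number of licks" as the mean lick count per centimeter within each window, which is an assumption.

import numpy as np

def prelick_index(lick_positions_cm):
    # `lick_positions_cm`: lick locations on one lap relative to the reward location
    # (negative = before the reward); consumptive licks are assumed to be removed already.
    licks = np.asarray(lick_positions_cm, dtype=float)
    lick1 = np.sum((licks >= -50) & (licks < 0)) / 50.0      # pre-reward zone: -50 to 0 cm
    lick2 = np.sum((licks >= -200) & (licks < -50)) / 150.0  # preceding zone: -200 to -50 cm
    if lick1 + lick2 == 0:
        return np.nan                                        # lap excluded (no licks in either window)
    return lick1 / (lick1 + lick2)                           # 0 = no pre-reward licking, 1 = all pre-reward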
Loom reaction: We qualitatively assessed the initial reaction of mice to the looming stimulus by looking at the running velocity in the 10 seconds around the loom initiation time. Freeze: running velocity immediately decreases and is held at 0 cm/s for an extended period (often for minutes). Flee: running velocity immediately increases, followed after a few seconds by an extended period of freezing. No reaction: no change in velocity from the prior moments and no extended period of stationarity.
Loom freezing period: For mice that did freeze, we measured the time when mice resumed running. Such running was found by looking for the first moment the running velocity reached half of the maximum running velocity, which was calculated for each mouse as the 98th percentile of running velocity over the entire session. We ignored the first 10 seconds immediately after the loom since some mice initially and transiently increased their running velocity (fleeing) before freezing. We also measured the freezing time until first detected movement since it was possible the mouse resumed movement but without running. To ensure that the treadmill velocity faithfully reported any movements (and not just running), we recorded video of the mouse’s body during the loom sessions. We quantified the energy in a region-of-interest around the body of the mouse (mean across pixels of the square of the time derivative of individual pixels in the region) and found a high correspondence to the treadmill velocity (FIG. 11).
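The time-to-resume-running measure can be sketched as follows. This is an illustrative Python version (the original analysis was in MATLAB); the variable names and sampling-rate handling are assumptions.

import numpy as np

def time_to_resume_running(velocity_cm_s, sample_rate_hz, loom_time_s):
    # `velocity_cm_s`: treadmill velocity for the whole session; `loom_time_s`: loom onset time.
    v = np.asarray(velocity_cm_s, dtype=float)
    v_max = np.percentile(v, 98)                              # per-session maximum running velocity
    threshold = 0.5 * v_max                                   # "resumed running" criterion
    start_idx = int((loom_time_s + 10.0) * sample_rate_hz)    # skip the first 10 s (possible fleeing)
    above = np.nonzero(v[start_idx:] >= threshold)[0]
    if above.size == 0:
        return np.nan                                         # mouse never resumed running
    resume_idx = start_idx + above[0]
    return resume_idx / sample_rate_hz - loom_time_s          # seconds from loom onset to resumed running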
Criteria for place cells: For each neuron, spatial information was calculated using binned position (5 cm bins, periods of immobility and reward consumption excluded) [15]. The calculation was repeated using shuffled data. Neurons with spatial information of at least 0.75 bits/event that was also larger than the spatial information of 98% or more of the shuffles were categorized as place cells.
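A minimal sketch of this criterion, using the standard spatial information formula in bits per event, is shown below. The original analysis was performed in MATLAB; the exact binning, occupancy weighting and shuffle construction used in the study may differ from this illustration.

import numpy as np

def spatial_information_bits_per_event(event_rate_per_bin, occupancy_per_bin):
    # `event_rate_per_bin`: a neuron's mean transient rate in each position bin;
    # `occupancy_per_bin`: time spent in each bin (immobility already excluded).
    rate = np.asarray(event_rate_per_bin, dtype=float)
    p_occ = np.asarray(occupancy_per_bin, dtype=float)
    p_occ = p_occ / p_occ.sum()
    mean_rate = np.sum(p_occ * rate)
    if mean_rate == 0:
        return 0.0
    valid = rate > 0
    return np.sum(p_occ[valid] * (rate[valid] / mean_rate) * np.log2(rate[valid] / mean_rate))

def is_place_cell(si_observed, si_shuffles, min_bits=0.75, percentile=98):
    # Place cell if SI >= 0.75 bits/event and exceeds the SI of >= 98% of shuffles.
    return si_observed >= min_bits and si_observed > np.percentile(si_shuffles, percentile)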
Calculation of peak location: For each place cell, mean transient rate at each position (1 cm bins) was calculated across laps. The peak location was calculated as the position with the maximal mean transient rate.
Reliability score: For each place cell, we calculated the fraction of laps on which significant firing occurred within the dominant place field of that neuron [2].
Cross-validation procedure: Spatial firing maps and other within-environment calculations used cross-validated data. In these cases, data was separated by even and odd laps.
Calculation of correlations: At each position, the Pearson correlation was measured between the vector of population firing under two conditions, thus quantifying similarity of individual neural firing. The two conditions were either taken as the comparison of odd and even laps (for example, with familiar-familiar measures) or all laps across conditions (for example, for familiar-novel measures). The values were then averaged across all positions. For comparison of correlations across positions, we compared the mean correlations on the ends of the track (first and last 50 cm) against the mean correlations in the middle of the track (entire track excluding the first and last 50 cm).
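The position-wise population-vector correlation can be sketched as follows. This is an illustrative Python version with assumed input shapes; the handling of positions with no firing variance is a choice made here for the sketch, not a detail taken from the source.

import numpy as np

def mean_population_correlation(maps_a, maps_b):
    # `maps_a`, `maps_b`: (n_neurons x n_position_bins) mean firing maps from the two
    # conditions (e.g., odd vs. even laps, or familiar vs. novel); names are illustrative.
    a, b = np.asarray(maps_a, float), np.asarray(maps_b, float)
    corrs = []
    for pos in range(a.shape[1]):                   # one correlation per position bin
        x, y = a[:, pos], b[:, pos]
        if x.std() == 0 or y.std() == 0:
            continue                                # skip bins with no variance across neurons
        corrs.append(np.corrcoef(x, y)[0, 1])       # Pearson correlation of the population vectors
    return float(np.mean(corrs))                    # average across positions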
Bayesian decoding: For a given imaging session, population neural activity was used to decode position along the track. This procedure was performed in two ways. First, to assess the ability of pre-loom activity to decode post-loom position, we trained the Bayesian decoder [16] on the pre-loom data after binning the data by position along the track. The trained decoder was then used to decode the post-loom data, again after binning by position along the track. For comparison, we also decoded pre-loom position from pre-loom data by splitting the data into odd laps (training set) and even laps (test set). Second, we assessed the decoded position during the freezing period in response to the loom stimulus. For this calculation, we trained the Bayesian decoder on all the pre-loom data and applied it to the neural activity recorded during the time that the mouse froze in response to the loom stimulus.
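The decoder itself follows the cited approach; only to make the train/test logic concrete, the sketch below implements a generic Poisson naive-Bayes position decoder with a flat prior. This is an assumption for illustration, not necessarily the exact published method, and all names are hypothetical.

```matlab
function posHat = decodePosition(trainAct, trainBin, testAct, nBins, dt)
% trainAct, testAct : frames x neurons activity (e.g., deconvolved event counts)
% trainBin          : position bin (1..nBins) of each training frame
% dt                : frame duration (s); assumes every bin was visited in training
    nNeurons = size(trainAct, 2);
    lambda   = zeros(nBins, nNeurons);
    for b = 1:nBins
        lambda(b, :) = mean(trainAct(trainBin == b, :), 1) + 1e-3;  % tuning curves (offset avoids log(0))
    end
    expected = lambda * dt;                               % expected counts per frame in each bin
    logL = testAct * log(expected)' - sum(expected, 2)';  % Poisson log-likelihood (constants dropped)
    [~, posHat] = max(logL, [], 2);                       % decoded bin per test frame (flat prior)
end
```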
Quantification and statistical analysis
Statistical tests used in the paper are indicated where appropriate. Results are reported as mean +/- standard deviation unless otherwise indicated. MATLAB built-in functions were used to perform the statistical tests. The number of animals used is indicated in the figure or in the text, as appropriate. In some cases, we instead report the number of imaging sessions (‘FOVs’); in these cases, the figure legend indicates how many mice were used. Significance was set at p < 0.05.
Data and code availability
Lens design (in Zemax), 3D models of custom equipment used, and VR environments (in Unity) are available at an online repository (https://github.com/DombeckLab/IMRSIV).
Table 1: Key resources

[Table 1 is provided as images (pages 45-46) in the original application.]
EXAMPLE 2:
VIRTUAL REALITY GOGGLES WITH EYE TRACKING
The iMRSiV goggle system fully covers the eyes of mice, making it difficult to determine the position and orientation of the eyes. The alignment procedure for positioning each eye at the proper location with respect to the goggles was time-consuming, as it had to be repeated for each eye of each mouse. For the goggle system to be used broadly throughout the neuroscience community, rapid and accurate alignment methods are needed. Further, once alignment is achieved, the eye orientation can change due to saccades, which must be measured in many neuroscience applications. This exemplary embodiment of the virtual reality goggles achieves both requirements by engineering eye-tracking capability into the goggle system for rapid alignment and for monitoring eye positions.
Referring to FIGS. 14-15, one exemplary embodiment of the eye-tracking version of iMRSiV is shown. In this embodiment, the iMRSiV uses a miniature CMOS camera (120 Hz frame rate, 800x600 pixels, Raspberry Pi Camera Module 3) and 850 nm IR illumination. Pupil center and diameter were tracked using custom MATLAB software based on a published algorithm (https://doi.org/10.1126/science.aav7893) (FIG. 16). The visible OLED display was illuminated during the acquisition of the eye-tracking data in FIG. 16, and it did not interfere with the IR eye-tracking path. The current capabilities suffice to extract eye position and pupil diameter and can be used for goggle alignment. FIG. 14 shows the iMRSiV goggle design with the eye-tracking IR optical path (left) and the CAD design (right). The iMRSiV visible OLED display is covered with a dichroic film (IR reflective and visible passing) so that it acts as an IR mirror for the eye-tracking path while transmitting the VR visible scene. Visible light is blocked from the IR camera by a visible light filter (IR passing). FIG. 15 shows a 3D model of the prototype with the light path traced from the eye pupil to the camera shown in red.
FIG. 16 shows an example of eye position and pupil size tracking using the prototype eye-tracking version of iMRSiV (shown in FIG. 14) in a head-fixed mouse running on a treadmill. Only one side of the iMRSiV system was used for this demonstration, which shows the ability to perform eye tracking with this system.
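The published algorithm cited above was used for the actual tracking. Purely as an illustration of the general dark-pupil approach under IR illumination, a minimal MATLAB sketch (Image Processing Toolbox; the threshold and morphological parameters are assumptions) is:

```matlab
function [center, diameter] = trackPupil(irFrame, thresh)
% irFrame : grayscale IR eye image; thresh : intensity cutoff in [0, 1]
    bw = im2double(irFrame) < thresh;                    % pupil appears dark under IR
    bw = imfill(imopen(bw, strel('disk', 2)), 'holes');  % clean up the binary mask
    stats = regionprops(bw, 'Area', 'Centroid', 'EquivDiameter');
    if isempty(stats)
        center = [NaN NaN]; diameter = NaN;              % no pupil found in this frame
        return
    end
    [~, k]   = max([stats.Area]);                        % keep the largest dark region
    center   = stats(k).Centroid;                        % pupil center (x, y) in pixels
    diameter = stats(k).EquivDiameter;                   % pupil diameter in pixels
end
```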
The foregoing description of the exemplary embodiments of the invention has been presented only for the purposes of illustration and description, and is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching.
The embodiments were chosen and described to explain the principles of the invention and their practical application to enable others skilled in the art to utilize the invention and various embodiments and with various modifications suited to the particular use contemplated. Alternative embodiments will become apparent to those skilled in the art to which the invention pertains without departing from its spirit and scope. Accordingly, the scope of the invention is defined by the appended claims rather than the foregoing description and the exemplary embodiments described therein.
Some references, which may include patents, patent applications, and various publications, are cited and discussed in the description of this invention. The citation and/or discussion of such references is provided merely to clarify the description of the invention and is not an admission that any such reference is “prior art” to the invention described herein. All references cited and discussed in this specification are incorporated herein by reference in their entireties and to the same extent as if each reference was individually incorporated by reference.
LIST OF REFERENCES
[1], Dombeck, D.A., Khabbaz, A.N., Collman, F., Adelman, T.L., and Tank, D.W. (2007).
Imaging large-scale neural activity with cellular resolution in awake, mobile mice. Neuron 56, 43-57.
[2], Sofroniew, N.J., Flickinger, D., King, J., and Svoboda, K. (2016). A large field of view two-photon mesoscope with subcellular resolution for in vivo imaging. Elife 5. 10.7554/eLife.14472.
[3], Stringer, C., Pachitariu, M., Steinmetz, N., Carandini, M., and Harris, K.D. (2019). Highdimensional geometry of population responses in visual cortex. Nature 571, 361-365. 10.1038/s41586-019-1346-5.
[4], Yu, C.H., Stirman, J.N., Yu, Y., Hira, R., and Smith, S.L. (2021). Diesel2p mesoscope with dual independent scan engines for flexible capture of dynamics in distributed neural circuitry. Nat Commun 12, 6639. 10.1038/s41467-021-26736-4.
[5], Petersen, C.C., Hahn, T.T., Mehta, M., Grinvald, A., and Sakmann, B. (2003). Interaction of sensory responses with spontaneous depolarization in layer 2/3 barrel cortex. Proc Natl Acad Sci U S A 100, 13638-13643. 10.1073/pnas.2235811100.
[6], Margrie, T.W., Brecht, M., and Sakmann, B. (2002). In vivo, low-resistance, whole-cell recordings from neurons in the anaesthetized and awake mammalian brain. Pflugers Arch 444, 491-498. 10.1007/s00424-002-0831-z.
[7], Yu, J., Gutnisky, D.A., Hires, S.A., and Svoboda, K. (2016). Layer 4 fast-spiking interneurons filter thalamocortical signals during active somatosensation. Nat Neurosci 19, 1647-1657. 10.1038/nn.4412.
[8], Smith, S.L., Smith, I.T., Branco, T., and Hausser, M. (2013). Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo. Nature 503, 115-120.
10.1038/nature12600.
[9], Jia, X., Siegle, J.H., Durand, S., Heller, G., Ramirez, T.K., Koch, C., and Olsen, S.R.
(2022). Multi-regional module-based signal transmission in mouse visual cortex. Neuron 110, 1585-1598.e1589. 10.1016/j.neuron.2022.01.027.
[10], Jun, J. J., Steinmetz, N.A., Siegle, J.H., Denman, D.J., Bauza, M., Barbarits, B., Lee,
A.K., Anastassiou, C.A., Andrei, A., Aydin, C., et al. (2017). Fully integrated silicon probes for high-density recording of neural activity. Nature 551, 232-236. 10.1038/nature24636.
[11], Cohen, J.D., Bolstad, M., and Lee, A.K. (2017). Experience-dependent shaping of hippocampal CA1 intracellular activity in novel and familiar environments. Elife 6. 10.7554/eLife.23040.
[12], Dombeck, D.A., Harvey, C.D., Tian, L., Looger, L.L., and Tank, D.W. (2010).
Functional imaging of hippocampal place cells at cellular resolution during virtual navigation. Nature neuroscience 13, 1433-1440.
[13], Harvey, C.D., Collman, F., Dombeck, D.A., and Tank, D.W. (2009). Intracellular dynamics of hippocampal place cells during virtual navigation. Nature 461, 941-946. 10.1038/nature08499.
[14], Campbell, M.G., Ocko, S.A., Mallory, C.S., Low, I.I.C., Ganguli, S., and Giocomo, L.M.
(2018). Principles governing the integration of landmark and self-motion cues in entorhinal cortical codes for navigation. Nat Neurosci 21, 1096-1106. 10.1038/s41593- 018-0189-y.
[15], Gauthier, J.L., and Tank, D.W. (2018). A Dedicated Population for Reward Coding in the
Hippocampus. Neuron 99, 179-193.e177. 10.1016/j.neuron.2018.06.008.
[16], Harvey, C.D., Coen, P., and Tank, D.W. (2012). Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature 484, 62-68. 10.1038/nature10918.
[17], Pinto, L., Rajan, K., DePasquale, B., Thiberge, S.Y., Tank, D.W., and Brody, C.D.
(2019). Task-Dependent Changes in the Large-Scale Dynamics and Necessity of Cortical Regions. Neuron 104, 810-824 e819. 10.1016/j. neuron.2019.08.025.
[18], Scholl, B., Burge, J., and Priebe, N.J. (2013). Binocular integration and disparity selectivity in mouse primary visual cortex. J Neurophysiol 109, 3013-3024. 10.1152/jn.01021.2012.
[19], Boone, H.C., Samonds, J.M., Crouse, E.C., Barr, C., Priebe, N.J., and McGee, A.W.
(2021). Natural binocular depth discrimination behavior in mice explained by visual cortical activity. Curr Biol 31, 2191-2198 e2193. 10.1016/j. cub.2021.02.031.
[20], Holmgren, C.D., Stahr, P., Wallace, D.J., Voit, K.M., Matheson, E.J., Sawinski, J.,
Bassetto, G., and Kerr, J.N. (2021). Visual pursuit behavior in mice maintains the pursued prey on the retinal region with least optic flow. Elife 10. 10.7554/eLife.70838.
[21], Heys, J.G., Rangarajan, K.V., and Dombeck, D.A. (2014). The functional microorganization of grid cells revealed by cellular-resolution imaging. Neuron 84, 1079-1090. 10.1016/j. neuron.2014.10.048.
[22], Wallace, D.J., Greenberg, D.S., Sawinski, J., Rulla, S., Notaro, G., and Kerr, J.N. (2013).
Rats maintain an overhead binocular field at the expense of constant fusion. Nature 498, 65-69. 10.1038/nature12153.
[23], Yilmaz, M., and Meister, M. (2013). Rapid innate defensive responses of mice to looming visual stimuli. Curr Biol 23, 2011-2015. 10.1016/j.cub.2013.08.015.
[24], Ravassard, P., Kees, A., Willers, B., Ho, D., Aharoni, D.A., Cushman, J., Aghajan, Z.M., and Mehta, M.R. (2013). Multisensory control of hippocampal spatiotemporal selectivity. Science (New York, N.Y.) 340, 1342-1346. 10.1126/science.1232655.
[25], Minderer, M., Harvey, C.D., Donato, F., and Moser, E.I. (2016). Neuroscience: Virtual reality explored. Nature 533, 324-325. 10.1038/naturel7899.
[26], Bollu, T., Whitehead, S.C., Prasad, N., Walker, J., Shyamkumar, N., Subramaniam, R.,
Kardon, B., Cohen, I., and Goldberg, J.H. (2019). Automated home cage training of mice in a hold-still center-out reach task. J Neurophysiol 121, 500-512.
10.1152/jn.00667.2018.
[27], Poddar, R., Kawai, R., and Olveczky, B.P. (2013). A fully automated high-throughput training system for rodents. PLoS One 8, e83171. 10.1371/journal.pone.0083171.
[28], Ding, X.Q., Tan, J.Z., Meng, J., Shao, Y.L., Shen, M.X., and Dai, C.X. (2023). Time-
Serial Evaluation of the Development and Treatment of Myopia in Mice Eyes Using OCT and ZEMAX. Diagnostics 13. ARTN 379. 10.3390/diagnostics13030379.
[29], Gardner, M.R., Katta, N., Rahman, A.S., Rylander, H.G., and Milner, T.E. (2018).
Design Considerations for Murine Retinal Imaging Using Scattering Angle Resolved Optical Coherence Tomography. Appl Sci-Basel 8. ARTN 2159. 10.3390/app8112159.
[30], Wong, A.A., and Brown, R.E. (2006). Visual detection, pattern discrimination and visual acuity in 14 strains of mice. Genes Brain Behav 5, 389-403. 10.1111/j.1601-183X.2005.00173.x.
[31], Sterratt, D.C., Lyngholm, D., Willshaw, D.J., and Thompson, I.D. (2013). Standard anatomical and visual space for the mouse retina: computational reconstruction and transformation of flattened retinae with the Retistruct package. PLoS Comput Biol 9, e1002921. 10.1371/journal.pcbi.1002921.
[32], Sheffield, M.E.J., Adoff, M.D., and Dombeck, D.A. (2017). Increased Prevalence of
Calcium Transients across the Dendritic Arbor during Place Field Formation. Neuron 96, 490-504 e495. 10.1016/j.neuron.2017.09.029.
[33], Radvansky, B.A., and Dombeck, D.A. (2018). An olfactory virtual reality system for mice. Nat Commun 9, 839. 10.1038/s41467-018-03262-4.
[34], Pettit, N.L., Yuan, X.C., and Harvey, C.D. (2022). Hippocampal place codes are gated by behavioral engagement. Nat Neurosci 25, 561-566. 10.1038/s41593-022-01050-4.
[35], Radvansky, B.A., Oh, J.Y., Climer, J.R., and Dombeck, D.A. (2021). Behavior determines the hippocampal spatial mapping of a multisensory environment. Cell Rep 36, 109444. 10.1016/j . celrep .2021.109444.
[36], Solomon, S.G., Janbon, H., Bimson, A., and Wheatcroft, T. (2023). Visual spatial location influences selection of instinctive behaviours in mouse. R Soc Open Sci 10, 230034. 10.1098/rsos.230034.
[37], Shang, C., Chen, Z., Liu, A., Li, Y., Zhang, J., Qu, B., Yan, F., Zhang, Y., Liu, W., Liu,
Z., et al. (2018). Divergent midbrain circuits orchestrate escape and freezing responses to looming stimuli in mice. Nat Commun 9, 1232. 10.1038/s41467-018-03580-7.
[38], Muller, R.U., and Kubie, J.L. (1987). The effects of changes in the environment on the spatial firing of hippocampal complex-spike cells. J Neurosci 7, 1951-1968.
10.1523/JNEUROSCI.07-07-01951.1987.
[39], Leutgeb, S., Leutgeb, J.K., Barnes, C.A., Moser, E.I., McNaughton, B.L., and Moser,
M.B. (2005). Independent codes for spatial and episodic memory in hippocampal neuronal ensembles. Science (New York, N.Y.) 309, 619-623. 10.1126/science.1114037.
[40], Dong, C., Madar, A.D., and Sheffield, M.E.J. (2021). Distinct place cell dynamics in
CA1 and CA3 encode experience in new environments. Nat Commun 12, 2977.
10.1038/s41467-021-23260-3.
[41], Ziv, Y., Burns, L.D., Cocker, E.D., Hamel, E.O., Ghosh, K.K., Kitch, L.J., El Gamal, A., and Schnitzer, M.J. (2013). Long-term dynamics of CA1 hippocampal place codes. Nat Neurosci 16, 264-266. 10.1038/nn.3329.
[42], Kaufman, A.M., Geiller, T., and Losonczy, A. (2020). A Role for the Locus Coeruleus in
Hippocampal CA1 Place Cell Reorganization during Spatial Reward Learning. Neuron 105, 1018-1026.e1014. 10.1016/j.neuron.2019.12.029.
[43], McNaughton, B.L., Barnes, C.A., and O'Keefe, J. (1983). The contributions of position, direction, and velocity to single unit activity in the hippocampus of freely-moving rats. Exp Brain Res 52, 41-49. 10.1007/BF00237147.
[44], Nakazawa, K., Sun, L.D., Quirk, M.C., Rondi-Reig, L., Wilson, M.A., and Tonegawa, S.
(2003). Hippocampal CA3 NMDA receptors are crucial for memory acquisition of one-time experience. Neuron 38, 305-315. 10.1016/s0896-6273(03)00165-x.
[45], Juavinett, A.L., Bekheet, G., and Churchland, A.K. (2019). Chronically implanted
Neuropixels probes enable high-yield recordings in freely moving mice. Elife 8. 10.7554/eLife.47188.
[46], Zong, W., Obenhaus, H.A., Skytoen, E.R., Eneqvist, H., de Jong, N.L., Vale, R., Jorge, M.R., Moser, M.B., and Moser, E.I. (2022). Large-scale two-photon calcium imaging in freely moving mice. Cell 185, 1240-1256.e1230. 10.1016/j.cell.2022.02.017.
[47], Voigts, J., Newman, J.P., Wilson, M.A., and Harnett, M.T. (2020). An easy-to-assemble, robust, and lightweight drive implant for chronic tetrode recordings in freely moving animals. J Neural Eng 17, 026044. 10.1088/1741-2552/ab77f9.
[48], Johnson, K.P., Fitzpatrick, M.J., Zhao, L., Wang, B., McCracken, S., Williams, P.R., and
Kerschensteiner, D. (2021). Cell-type-specific binocular vision guides predation in mice. Neuron 109, 1527-1539.e1524. 10.1016/j.neuron.2021.03.010.
[49], Sit, K.K., and Goard, M.J. (2023). Coregistration of heading to visual cues in retrosplenial cortex. Nat Commun 14, 1992. 10.1038/s41467-023-37704-5.
[50], Klioutchnikov, A., Wallace, D.J., Sawinski, J., Voit, K.M., Groemping, Y., and Kerr,
J.N.D. (2023). A three-photon head-mounted microscope for imaging all layers of visual cortex in freely moving mice. Nat Methods 20, 610-616. 10.1038/s41592-022-01688-9.
[51], Chen, G., King, J.A., Lu, Y., Cacucci, F., and Burgess, N. (2018). Spatial cell firing during virtual navigation of open arenas by head-restrained mice. Elife 7.
10.7554/eLife.34789.
[52], Voigts, J., and Harnett, M.T. (2020). Somatic and Dendritic Encoding of Spatial
Variables in Retrosplenial Cortex Differs during 2D Navigation. Neuron 105, 237-245 e234. 10.1016/j. neuron.2019.10.016.
[53], McGinley, M.J., David, S.V., and McCormick, D.A. (2015). Cortical Membrane
Potential Signature of Optimal States for Sensory Signal Detection. Neuron 87, 179-192. 10.1016/j. neuron.2015.05.038.
[54], Meyer, A.F., O'Keefe, J., and Poort, J. (2020). Two Distinct Types of Eye-Head Coupling in Freely Moving Mice. Curr Biol 30, 2116-2130 e2116. 10.1016/j. cub.2020.04.042.
[55], Runyan, C.A., Piasini, E., Panzeri, S., and Harvey, C.D. (2017). Distinct timescales of population coding across cortex. Nature 548, 92-96. 10.1038/nature23020.
[56], Gao, S., Webb, J., Mridha, Z., Banta, A., Kemere, C., and McGinley, M. (2020). Novel
Virtual Reality System for Auditory Tasks in Head-fixed Mice. Annu Int Conf IEEE Eng Med Biol Soc 2020, 2925-2928. 10.1109/EMBC44109.2020.9176536.
[57], Sofroniew, N.J., Cohen, J.D., Lee, A.K., and Svoboda, K. (2014). Natural whisker-guided behavior by head-fixed mice in tactile virtual reality. J Neurosci 34, 9537-9550. 10.1523/JNEUROSCI.0712-14.2014.
[58], Hafting, T., Fyhn, M., Molden, S., Moser, M.B., and Moser, E.I. (2005). Microstructure of a spatial map in the entorhinal cortex. Nature 436, 801-806. 10.1038/nature03721.
[59], O'Keefe, J., and Dostrovsky, J. (1971). The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain research 34, 171-175.
[60], Zhang, Y., Rozsa, M., Liang, Y., Bushey, D., Wei, Z., Zheng, J., Reep, D., Broussard,
G.J., Tsang, A., Tsegaye, G., et al. (2023). Fast and sensitive GCaMP calcium indicators for imaging neural populations. Nature 615, 884-891. 10.1038/s41586-023-05828-9.
[61], Dombeck, D.A., Harvey, C.D., Tian, L., Looger, L.L., and Tank, D.W. (2010).
Functional imaging of hippocampal place cells at cellular resolution during virtual navigation. Nat Neurosci 13, 1433-1440. 10.1038/nn.2648.
[62], Puk, O., Dalke, C., Favor, J., de Angelis, M.H., and Graw, J. (2006). Variations of eye size parameters among different strains of mice. Mamm Genome 17, 851-857. 10.1007/s00335-006-0019-5.
[63], Scholl, B., Burge, J., and Priebe, N.J. (2013). Binocular integration and disparity selectivity in mouse primary visual cortex. Journal of Neurophysiology 109, 3013-3024. 10.1152/jn.01021.2012.
[64], Boone, H.C., Samonds, J.M., Crouse, E.C., Barr, C., Priebe, N.J., and McGee, A.W.
(2021). Natural binocular depth discrimination behavior in mice explained by visual cortical activity. Current Biology 31, 2191-2198. e3. 10.1016/j.cub.2021.02.031.
[65], Holmgren, C.D., Stahr, P., Wallace, D.J., Voit, K.-M., Matheson, E.J., Sawinski, J.,
Bassetto, G., and Kerr, J.N. (2021). Visual pursuit behavior in mice maintains the pursued prey on the retinal region with least optic flow. eLife 10, e70838. 10.7554/eLife.70838.
[66], Zhang, P., Mocci, J., Wahl, D.J., Meleppat, R.K., Manna, S.K., Quintavalla, M.,
Muradore, R., Sarunic, M.V., Bonora, S., Pugh, E.N., et al. (2018). Effect of a contact lens on mouse retinal in vivo imaging: Effective focal length changes and monochromatic aberrations. Experimental Eye Research 172, 86-93. 10.1016/j.exer.2018.03.027.
[67], Sterratt, D.C., Lyngholm, D., Willshaw, D.J., and Thompson, I.D. (2013). Standard
Anatomical and Visual Space for the Mouse Retina: Computational Reconstruction and Transformation of Flattened Retinae with the Retistruct Package. PLOS Computational Biology 9, el002921. 10.1371/journal.pcbi.1002921. [68], Heys, J.G., Rangarajan, K.V., and Dombeck, D.A. (2014). The Functional Microorganization of Grid Cells Revealed by Cellular-Resolution Imaging. Neuron 84, 1079— 1090. 10.1016/j. neuron.2014.10.048.
[69], Sheffield, M E. J., Adoff, M.D., and Dombeck, D.A. (2017). Increased Prevalence of
Calcium Transients across the Dendritic Arbor during Place Field Formation. Neuron 96, 490-504. e5. 10.1016/j.neuron.2017.09.029.
[70], Yilmaz, M., and Meister, M. (2013). Rapid Innate Defensive Responses of Mice to
Looming Visual Stimuli. Current Biology 23, 2011-2015. 10.1016/j. cub.2013.08.015.
[71], Guizar-Sicairos, M., Thurman, S.T., and Fienup, J.R. (2008). Efficient subpixel image registration algorithms. Opt. Lett., OL 33, 156-158. 10.1364/OL.33.000156.
[72], Pachitariu, M., Stringer, C., Dipoppa, M., Schroder, S., Rossi, L.F., Dalgleish, H.,
Carandini, M., and Harris, K.D. (2017). Suite2p: beyond 10,000 neurons with standard two-photon microscopy. 061507. 10.1101/061507.
[73], Friedrich, J., Zhou, P., and Paninski, L. (2017). Fast online deconvolution of calcium imaging data. PLOS Computational Biology 13, el005423.
10.1371/journal.pcbi.1005423.
[74], Climer, J.R., and Dombeck, D.A. (2021). Information theoretic approaches to deciphering the neural code with functional fluorescence imaging. eNeuro.
10.1523/ENEURO.0266-21.2021.
[75], Etter, G., Manseau, F., and Williams, S. (2020). A Probabilistic Framework for Decoding
Behavior From in vivo Calcium Imaging Data. Frontiers in Neural Circuits 14, 19.
10.3389/fncir.2020.00019.
[76], Dobos G. et al., Virtual reality simulator and method for small laboratory animals, PCT
Patent Application Publication No. WO2021009526A1, January 21, 2021.
[77], Company selling current VR systems, Phenosys: https://www.phenosys.com/products/virtual-reality/.
[78], Princeton Invention Disclosure on mouse VR: https://puotl.technologypublisher.com/technology/8372.
[79], Isaacson et al., Society for Neuroscience meeting abstract: "A fast and immersive headset-based VR system for mouse neuroscience and behavior" (https://www.abstractsonline.com/pp8/#!/10619/presentation/77506).
[80], Pinke et al., Society for Neuroscience meeting abstract: "Two-photon microscope compatible immersive virtual reality system for mice" (https://www.abstractsonline.com/pp8/#!/7883/presentation/69345).

CLAIMS

What is claimed is:
1. A virtual reality (VR) system, comprising:
a pair of concave lenses; and
a pair of screens, arranged in relation to the pair of concave lenses and eyes of a subject, for fully illuminating a visual field of view (FOV) of the subject.
2. The virtual reality system of claim 1, being configured to image objects displayed on the screen onto the eye retinas of the subject through the concave lens.
3. The virtual reality system of claim 1, being configured to illuminate each eye with an about 180-degree field of view in all directions.
4. The virtual reality system of claim 3, being configured to separately illuminate each eye for stereo illumination of the binocular zone, thereby excluding a lab frame from view while also accommodating saccades.
5. The virtual reality system of claim 3, wherein the about 180-degree field of view includes about 140 degrees for each eye FOV and +/- 20 degrees for saccades.
6. The virtual reality system of claim 1, wherein each concave lens is a positive-meniscus lens having an inner surface operably facing an eye of the subject.
7. The virtual reality system of claim 6, wherein each concave lens is arranged such that each eye is centered at a predetermined distance from an inner surface of each positive-meniscus lens.
8. The virtual reality system of claim 1, wherein each screen is a curved illumination display.
9. The virtual reality system of claim 8, wherein each curved screen is a flexible light- emitting diode (LED) display.
10. The virtual reality system of claim 1, wherein the virtual reality system provides a mean resolution of about 2.2 pixels/degree, or better, across the about 180-degree FOV.
11. The virtual reality system of claim 10, wherein each curved screen is a high-resolution organic light-emitting diode (OLED) screen configured to increase the pixels/degree to further exceed the acuity of mice, or by incorporating other sensory modalities, including olfactory, auditory, and tactile, into the virtual simulation, so as to further increase the immersiveness of the VR experience for the subject.
12. The virtual reality system of claim 1, being compatible with two-photon functional microscopy.
13. The virtual reality system of claim 12, wherein the virtual reality system is configured to allow one to place it under an upright two-photon microscope, providing a full FOV, including the overhead visual region, while imaging.
14. The virtual reality system of claim 13, wherein a custom shielding system is provided to fit around the objective and connect to a ring on the head of the subject, in order to block light from the illuminated screens from being detected by the microscope’s photodetectors.
15. The virtual reality system of claim 13, wherein, by combining the virtual reality system with two-photon functional imaging, the virtual reality system is usable to establish the existence of large populations of place cells in the hippocampus during virtual navigation, global remapping during an environment change, and the first descriptions of the response of place cell ensembles to overhead looming stimulation.
16. The virtual reality system of claim 13, being usable for studying neural coding properties of visual behaviors that utilize the large overhead binocular region thought to play a critical role in many rodent behaviors.
17. The virtual reality system of claim 1, further comprising at least one separate, but compatible, optical path for a camera to monitor eye movements and pupil size.
18. The virtual reality system of claim 1, further comprising prism mirrors for reorienting the placement of the screens relative to the head of the subject, thereby affording more access to certain brain regions.
19. The virtual reality system of claim 1, further comprising a data acquisition module for electrical recordings of data and data processing.
20. The virtual reality system of claim 1, being compatible with other VR approaches that rotate the animal in conjunction with movements through the virtual space to activate the vestibular system.
21. The virtual reality system of claim 1, being wearable by a freely moving subject.
22. The virtual reality system of claim 1, being usable for augmented visual reality paradigms in which the other senses, as well as self-motion cues, are preserved.
23. The virtual reality system of claim 1, being a miniature rodent stereo illumination VR (iMRSIV) system that is at least about 10 times smaller than existing VR systems.
24. The virtual reality system of any one of claims 1-23, further comprising:
a pair of screen holders, each screen holder with a curvature matching that of the screen, to which the screen is affixed; and
a pair of lens holders, each lens holder with the lens attached to one end and the other end mated to the screen holder such that the lens is centered at the desired distance from the screen.
25. The virtual reality system of claim 24, wherein each lens holder is mated to the screen holder with magnets.
26. The virtual reality system of claim 24, wherein an optical axis of the virtual reality system is aligned with the optical axis of the eye of the subject at its resting position.
27. The virtual reality system of claim 24, wherein, when aligned, the lens holder is in a desired position with the eye centered at an about 1 mm distance from an inner surface of each positive-meniscus lens.
28. The virtual reality system of claim 24, further comprising an eye tracking module configured to determine position and orientation of the eye of the subject for aligning each eye to a proper location with respect to the virtual reality system and measuring the orientation of the eye, once aligned.
29. The virtual reality system of claim 28, wherein the eye tracking module comprises an infrared (IR) illumination, an IR mirror, and an IR camera positioned in an eye tracking path in relation to the eye of the subject.
30. The virtual reality system of claim 29, wherein the IR camera comprises CCD (charge- coupled device) and/or CMOS (complementary metal-oxide-semiconductor) sensors.
31. The virtual reality system of claim 29, wherein the IR illumination comprises one or more IR LEDs emitting IR light with a wavelength of about 850 nm.
32. The virtual reality system of claim 29, wherein a dichroic film is provided to cover the display to act as the IR mirror in the eye tracking path while transmitting the VR visible scene.
33. The virtual reality system of claim 32, wherein the dichroic film is an IR reflective and visible passing film.
34. The virtual reality system of claim 29, wherein the eye tracking module further comprises a visible light filter positioned in the eye tracking path in front of the camera to block visible light from the camera while passing the IR light.
PCT/US2024/057391 | 2023-11-28 | 2024-11-26 | Full field of view virtual reality goggles | Pending | WO2025117501A1 (en)

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US202363603208P | 2023-11-28 | 2023-11-28
US63/603,208 | 2023-11-28

Publications (1)

Publication Number | Publication Date
WO2025117501A1 (en) | 2025-06-05

Family

ID=95898153

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/US2024/057391 (Pending, WO2025117501A1 (en)) | Full field of view virtual reality goggles | 2023-11-28 | 2024-11-26

Country Status (1)

Country | Link
WO (1) | WO2025117501A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20160341966A1 (en)* | 2015-05-19 | 2016-11-24 | Samsung Electronics Co., Ltd. | Packaging box as inbuilt virtual reality display
KR102032980B1 (en)* | 2017-12-19 | 2019-10-16 | 주식회사 브이알이지이노베이션 | Foldable Virtual Reality Viewer
US20210208357A1 (en)* | 2019-08-28 | 2021-07-08 | Lg Electronics Inc. | Electronic device
US20220014224A1 (en)* | 2018-11-13 | 2022-01-13 | VR Coaster GmbH & Co. KG | Underwater VR headset
US20230004008A1 (en)* | 2015-03-16 | 2023-01-05 | Magic Leap, Inc. | Augmented and virtual reality display systems and methods for diagnosing health conditions based on visual fields



Legal Events

Code: 121
Title: Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 24898615
Country of ref document: EP
Kind code of ref document: A1

