Workflow:
Step 1 - Acclimation in holding room prior to testing (1 week).
Step 2 - Acclimation in behavior room (30-60 minutes).
Step 3 - Open field test. Equipment: open field arena, video camera. Data collected: top-down video at 480x480 pixels per frame.
Step 4 - Pose estimation. Equipment: HRNet-W32 neural network. Data collected: each 480x480 frame of video generates 12 480x480 heatmaps (one heatmap per key point); the argmax of each of the 12 heatmaps produces 12 (x, y) coordinates.
Step 5 - Gait extraction. Data collected: angular velocity, stride speed, limb duty factor, temporal symmetry, step length, step width, stride length, lateral displacement of nose, lateral displacement of base of tail, lateral displacement of tip of tail.
General Information for All Procedures: All behavioral procedures have been published previously (Geuther et al., 2019; Kumar et al., 2011). The strain survey data were published previously and were reanalyzed here for gait behavior (Geuther et al., 2019, 2021).
Procedure 1: HRNet-W32 model architecture, neural network training and open field test
Definitions & Abbreviations: HRNet-W32: High-Resolution Network (W32) model used for pose estimation and gait analysis.
Equipment, software, and supplies
- Open field arena: each arena measures 52 x 52 x 23 cm. The floor is white PVC plastic and the walls are gray PVC plastic. To facilitate cleaning and maintenance, a white 2.54 cm chamfer is added to all inner edges.
- Video camera
Environmental Conditions
Testing Area
Facility: Behavioral testing room
Acclimation Period: 30-60 minutes prior to testing
Light Source: LED ring light, model: FV R300
Light Intensity: 600 lux per arena
Temperature (°C): Not specified
Acoustic Background: White noise
Steps
Introductory Comments: The base of the toolkit is an HRNet-W32 neural network trained to perform pose estimation on top-down video of an open field. For each frame of video, the network provides the two-dimensional locations of 12 mouse anatomical landmarks, or 'key points', which together describe the pose of the mouse at that time point.
- The smaller HRNet-W32 architecture is selected because it was shown to provide significant speed and memory improvements for only a small reduction in accuracy compared with HRNet-W48. Two 5x5 transpose convolutions are added to the head of the network to match the heatmap output resolution to the resolution of the video input. All experiments used a single mouse in an open field, so instead of performing pose estimation after object detection, the full-resolution key point heatmaps are used directly to infer the posture of the single mouse in every frame. Each 480x480 frame of video therefore generates twelve 480x480 heatmaps, one heatmap per key point, and the maximum value in each heatmap represents the highest-confidence location for that key point. Taking the argmax of each of the 12 heatmaps yields 12 (x, y) coordinates that represent the animal's pose in that frame. For details, see the figure; a minimal code sketch of the argmax step follows this list.
- Approximately 8,000 images were labeled across a diverse set of strains to train a network that operates across 62 mouse strains with high visual diversity.
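The heatmap-to-coordinate conversion described above can be illustrated with a minimal sketch. It assumes the 12 heatmaps for one frame are available as a (12, 480, 480) NumPy array; the array layout and function name are illustrative and are not the published implementation.

```python
import numpy as np

def heatmaps_to_keypoints(heatmaps: np.ndarray) -> np.ndarray:
    """Convert per-key-point heatmaps to (x, y) coordinates via argmax.

    heatmaps: assumed array of shape (12, 480, 480), one confidence map
              per key point (the actual network output layout may differ).
    Returns:  array of shape (12, 2) holding (x, y) pixel coordinates.
    """
    n_keypoints, height, width = heatmaps.shape
    coords = np.zeros((n_keypoints, 2), dtype=int)
    for k in range(n_keypoints):
        # Take the index of the maximum confidence value and unravel it
        # back into (row, col) = (y, x) image coordinates.
        flat_idx = np.argmax(heatmaps[k])
        y, x = np.unravel_index(flat_idx, (height, width))
        coords[k] = (x, y)
    return coords

# Example: a single 480x480 frame yields 12 heatmaps -> 12 (x, y) points.
dummy_heatmaps = np.random.rand(12, 480, 480)
pose = heatmaps_to_keypoints(dummy_heatmaps)   # shape (12, 2)
```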
Additional ARRIVE Guidelines
Data Inclusion & Exclusion Criteria: Animals with too few strides are excluded from the study; this disproportionately affects low-activity strains.
Procedure 2: Gait extraction
Steps
Introductory Comments: The method for extracting gait structure from pose-estimated video frames is described below. An ANOVA F-test is used to analyze the resulting gait metrics.
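As one illustration of how such an F-test could be applied, the sketch below compares a single gait metric across strains with scipy.stats.f_oneway. The strain names are examples and the numeric values are fabricated purely for illustration; this is not the published statistical analysis.

```python
from scipy.stats import f_oneway

# Hypothetical per-strain measurements of one gait metric
# (e.g., stride speed in cm/s), one value per animal.
stride_speed_by_strain = {
    "C57BL/6J": [22.1, 24.3, 23.0, 25.8],
    "A/J":      [18.4, 17.9, 19.2, 16.8],
    "BALB/cJ":  [20.5, 21.7, 19.9, 22.4],
}

# One-way ANOVA F-test: does the metric differ among strains?
f_stat, p_value = f_oneway(*stride_speed_by_strain.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```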
- The process of detecting strides begins by determining intervals of time when the mouse is moving at a sufficient speed for strides to take place; these intervals are known as tracks. Track intervals are determined by observing that the base of tail key point is a good indicator of overall mouse speed, and a base of tail speed of ≥ 5 cm/s is used as the threshold (a code sketch of the track and step detection logic follows this list).
- The next step is to identify individual steps of the left hind paw and right hind paw. At this stage, steps are determined for each paw separately, without consideration of the other paw. Step detection relies on oscillations in paw speed: individual paw speeds are calculated, and a peak detection algorithm is applied to find local maxima in speed. For each local maximum, the surrounding local minima define a step interval, with a toe-off event and a foot-strike event on either side of the step. Steps whose peak speed does not exceed 15 cm/s or the overall animal speed, whichever is greater, are then filtered out.
- Once the sets of valid steps for the left and right hind paws are defined, steps are paired to find strides, with left hind paw steps used to delimit strides. A stride interval ends when the left hind paw step ends and begins at the frame just after the previous stride, with the additional constraint that stride intervals may not extend before or after the containing track. Having used the left hind paw step as a proxy for the stride interval, the right hind paw step that completes within the stride interval is associated with that stride; strides without such a right hind paw step are discarded from the analysis.
- Having defined strides and their associated steps, further filtering is applied to improve the quality and consistency of strides. All strides at the start and end of a track are removed to increase the consistency of gait metrics and avoid introducing variance due to starting and stopping behavior. Strides are also discarded if the key point confidence for the Nose, Neck Base, Spine Center, Tail Base, Hind Paw Left, Hind Paw Right, Tail Middle, or Tail Tip falls to ≤ 0.3 in any frame of the stride, again to avoid using low-quality strides in the analysis.
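The following is a minimal sketch of the track and step detection logic described in the list above, assuming key point trajectories already calibrated to cm and a fixed video frame rate. The function names, the frame rate constant, and the use of SciPy's find_peaks are assumptions for illustration; only the 5 cm/s track threshold and the 15 cm/s (or overall animal speed) step filter come from the protocol.

```python
import numpy as np
from scipy.signal import find_peaks

FRAME_RATE_HZ = 30.0          # assumed video frame rate (not stated above)
TRACK_SPEED_CM_S = 5.0        # base-of-tail speed threshold defining tracks
STEP_PEAK_SPEED_CM_S = 15.0   # minimum peak paw speed for a valid step

def speed_cm_per_s(xy_cm: np.ndarray) -> np.ndarray:
    """Frame-to-frame speed of a key point given (n_frames, 2) positions in cm."""
    velocity = np.diff(xy_cm, axis=0) * FRAME_RATE_HZ
    speed = np.linalg.norm(velocity, axis=1)
    return np.append(speed, speed[-1])   # pad to keep n_frames length

def find_tracks(tail_base_xy_cm: np.ndarray) -> list[tuple[int, int]]:
    """Intervals (start, end) where base-of-tail speed stays >= 5 cm/s."""
    moving = speed_cm_per_s(tail_base_xy_cm) >= TRACK_SPEED_CM_S
    tracks, start = [], None
    for i, is_moving in enumerate(moving):
        if is_moving and start is None:
            start = i
        elif not is_moving and start is not None:
            tracks.append((start, i))
            start = None
    if start is not None:
        tracks.append((start, len(moving)))
    return tracks

def find_steps(paw_xy_cm: np.ndarray, body_speed: np.ndarray) -> list[tuple[int, int]]:
    """Step intervals (toe-off, foot-strike) from oscillations in paw speed."""
    paw_speed = speed_cm_per_s(paw_xy_cm)
    peaks, _ = find_peaks(paw_speed)      # local maxima in paw speed
    minima, _ = find_peaks(-paw_speed)    # surrounding local minima
    steps = []
    for p in peaks:
        # Discard steps whose peak speed does not exceed 15 cm/s or the
        # overall animal speed at that frame, whichever is greater.
        if paw_speed[p] <= max(STEP_PEAK_SPEED_CM_S, body_speed[p]):
            continue
        before = minima[minima < p]       # toe-off candidate
        after = minima[minima > p]        # foot-strike candidate
        if len(before) and len(after):
            steps.append((int(before[-1]), int(after[0])))
    return steps
```

Stride pairing and the confidence-based filtering described in the last two items of the list would then operate on the step intervals returned by find_steps, keeping only strides fully contained within a track.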
Primary References
Sheppard K, Gardin J, Sabnis GS, Peer A, Darrell M, Deats S, Geuther B, Lutz CM, Kumar V. Stride-level analysis of mouse open field behavior using deep-learning-based pose estimation. Cell Rep. 2022 Jan 11;38(2):110231. doi: 10.1016/j.celrep.2021.110231.
PubMed 35021077
Protocol References
Geuther BQ, Deats SP, Fox KJ, Murray SA, Braun RE, White JK, Chesler EJ, Lutz CM, Kumar V. Robust mouse tracking in complex environments using neural networks. Commun Biol. 2019 Mar 29;2:124. doi: 10.1038/s42003-019-0362-0. eCollection 2019.
PubMed 30937403
Geuther BQ, Peer A, He H, Sabnis G, Philip VM, Kumar V. Action detection using a neural network elucidates the genetics of mouse grooming behavior. Elife. 2021;10:e63207. doi: 10.7554/eLife.63207.
PubMed 33729153
Kumar V, Kim K, Joseph C, Thomas LC, Hong H, Takahashi JS. Second-generation high-throughput forward genetic screen in mice to isolate subtle behavioral mutants. Proc Natl Acad Sci U S A. 2011 Sep 13;108 Suppl 3:15557-64. doi: 10.1073/pnas.1107726108. Epub 2011 Sep 6.
PubMed 21896739