KineWheel–DeepLabCut: Automated Paw Annotation Using Alternating UV and White Light Illumination

Revolutionizing Rodent Pose Tracking Through Automated Annotation

Uncovering the relationships between neural circuits, behavior, and neurological dysfunction is a crucial pursuit in neuroscience research. While open-source toolkits like DeepLabCut have transformed the field of markerless pose estimation using deep neural networks, the training process still requires significant human intervention for annotating key points of interest in video data.

To streamline this process and reduce the need for manual labor, researchers have developed a method that automatically generates annotated image datasets of rodent paw placement in laboratory settings. The approach uses fluorescent markers that are invisible under white light but become temporarily visible under UV light, enabling a marker-based pose estimation system to deterministically map paw locations during the data acquisition phase.

Through stroboscopic alternating illumination, adjacent video frames are captured with either UV or white light exposure. By filtering the UV-exposed frames, the distinct paw markers are identified, and their positions are transferred to automatically annotate the corresponding paw locations in the adjacent white light-exposed frames. These automatically labeled frames are then used to train a deep neural network for markerless pose estimation, eliminating the need for manual annotation.
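As a minimal sketch (not the authors' actual pipeline), the demultiplexing and label-transfer steps described above might look like this in Python, assuming the camera delivers an interleaved sequence that starts with a UV exposure; the real trigger order depends on the control hardware:

```python
def split_alternating(frames):
    """Split an interleaved frame sequence into UV-exposed and
    white-light frames. Assumes even indices are UV-exposed and
    odd indices are white-light; the hardware order may differ."""
    uv = frames[0::2]
    white = frames[1::2]
    return uv, white

def transfer_labels(uv_marker_coords, white_frames):
    """Pair each marker coordinate detected in a UV-exposed frame
    with the adjacent white-light frame, yielding (frame, label)
    training pairs for the markerless model."""
    n = min(len(uv_marker_coords), len(white_frames))
    return [(white_frames[i], uv_marker_coords[i]) for i in range(n)]
```

Because adjacent frames are captured milliseconds apart, the paw barely moves between a UV exposure and its white-light neighbor, which is what makes this direct coordinate transfer valid.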

This automated approach, released as open source, has been successfully implemented using a KineWheel-DeepLabCut setup, demonstrating its effectiveness in the markerless tracking of a mouse’s four paws as it runs on a transparent wheel with an integrated mirror. The results show that the model trained on automatically annotated data performs comparably to one trained on manual annotations, while significantly reducing the time and effort required for dataset creation.

The Importance of Automated Rodent Pose Tracking

Accurate, efficient, and automated methods for rodent pose tracking are crucial in neuroscience research, as they enable a deeper understanding of the complex relationships between neural circuits, behavior, and neurological disorders. Traditional approaches often rely on time-consuming manual annotations, which are prone to human bias and inconsistencies.

In contrast, automated methods utilizing deep learning have revolutionized the field, allowing researchers to track rodent behavior with minimal training data and effort, while achieving accuracy levels comparable to human performance. These advancements have opened up new possibilities for gaining insights into how animals move and interact, which can be directly linked to the function and organization of neural circuits.

Furthermore, the high-throughput collection and analysis of behavioral data facilitated by these automated methods have the potential to drive new discoveries in neuroscience, including the understanding of neural dysfunction and the development of treatments for neurological disorders.

Overcoming the Limitations of Marker-Based Pose Estimation

While marker-based pose estimation has been a classic approach in biological research, it often faces challenges, such as the invasive nature of attaching markers to animals, which can interfere with their natural movement or behavior. Additionally, markers can wear off, degrade, or become obscured over time, leading to inconsistencies in the data.

The method presented in this article addresses these limitations by using the marker-based approach only during the data acquisition phase, where the automatically generated annotations are used to train a deep neural network. Once the model is trained, the inference phase can be performed using markerless pose estimation, without the need for any markers or UV lighting.

This approach effectively combines the advantages of both marker-based and markerless methods, offering a streamlined and efficient workflow for rodent pose tracking. By automating the annotation process, the method significantly reduces the time and effort required for dataset creation, while maintaining the high-quality results necessary for training accurate deep learning models.

Methodology and Experimental Setup

The KineWheel-DeepLabCut system used in this study consists of a transparent wheel, a high-speed camera, and a custom control system that synchronizes the alternating UV and white light illumination with the camera’s global shutter. The KineWheel is placed within a box with white foam board walls to provide a neutral background and consistent lighting conditions.

To create the automatically annotated dataset, the researchers used an odorless, UV-reflective ink to mark the mouse’s paws with distinct colors: green for the right front paw, blue for the left front paw, red for the left hindpaw, and turquoise for the right hindpaw. During the data acquisition phase, the alternating UV and white light illumination allows the system to capture video frames where the paw markers are visible only in the UV-exposed frames.

A custom script is then used to filter the UV-exposed frames, identify the distinct paw markers, and deterministically map their locations to the corresponding paw positions in the adjacent white light-exposed frames. This automatically annotated dataset is then used to train a deep neural network model using the DeepLabCut framework, without the need for manual annotation.
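A hypothetical version of such a filtering step could threshold each UV-exposed frame on the four marker colors and take the centroid of each mask. The RGB ranges below are illustrative placeholders, not the study's calibrated values, and the paw-to-color mapping follows the scheme described above:

```python
import numpy as np

# Illustrative RGB ranges for the four fluorescent paw markers;
# actual thresholds depend on the ink, camera, and UV intensity.
MARKER_RANGES = {
    "right_front": ((0, 150, 0), (100, 255, 100)),    # green
    "left_front":  ((0, 0, 150), (100, 100, 255)),    # blue
    "left_hind":   ((150, 0, 0), (255, 100, 100)),    # red
    "right_hind":  ((0, 150, 150), (100, 255, 255)),  # turquoise
}

def marker_centroid(frame, lo, hi):
    """Return the (row, col) centroid of pixels within the RGB
    range [lo, hi], or None if no pixel matches."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((frame >= lo) & (frame <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

def annotate_uv_frame(frame):
    """Map each paw label to its marker centroid in one UV frame."""
    return {paw: marker_centroid(frame, lo, hi)
            for paw, (lo, hi) in MARKER_RANGES.items()}
```

The resulting per-frame dictionaries can then be written out in DeepLabCut's labeled-data format in place of manual annotations.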

Evaluating the Automated Approach

To assess the effectiveness of the automated annotation method, the researchers trained two separate models: one using the automatically annotated dataset and another using manually annotated data. Both models were trained using the same DeepLabCut framework and parameters, and their performance was evaluated on a common test dataset.

The results showed that the model trained on the automatically annotated dataset achieved a mean Euclidean distance of 2.6 pixels (SEM = 0.23) from the ground truth key points, while the model trained on the manually annotated dataset achieved a mean Euclidean distance of 2.7 pixels (SEM = 0.17). A statistical analysis revealed no significant difference between the two models, indicating that the automated approach can achieve annotation quality comparable to manual labeling.
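The reported metric can be reproduced from predicted and ground-truth coordinates in a few lines of NumPy; this helper is an illustration of the calculation, not the authors' evaluation code:

```python
import numpy as np

def mean_error_and_sem(pred, truth):
    """Mean Euclidean distance between predicted and ground-truth
    keypoints, plus the standard error of the mean (SEM)."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    d = np.linalg.norm(pred - truth, axis=-1)  # per-keypoint distances
    return d.mean(), d.std(ddof=1) / np.sqrt(d.size)
```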

Importantly, the automated annotation process, including the initial paw marking, took significantly less time than the manual annotation, reducing the effort required to create the training dataset from 150 minutes to just a few minutes.

Advantages and Limitations of the Automated Approach

The key advantage of the automated annotation method is its ability to significantly reduce the time and labor required for dataset creation, without compromising the quality of the annotations. By leveraging the marker-based approach during the data acquisition phase and then transitioning to markerless pose estimation during the inference phase, the method offers the best of both worlds.

However, it’s important to note that marker-based pose estimation still has inherent limitations, such as the potential for marker interference with natural animal behavior and the risk of markers wearing off or becoming obscured over time. While these limitations do not affect the inference phase in this study, they should be considered when planning long-term experiments or extending the method to different species or experimental setups.

Additionally, the effectiveness of the automated annotation approach may be affected by the number of required markers and the ability to reliably distinguish them based on color. In the case of this study, the four-paw setup was well-suited for the available color options, but scaling the method to track a larger number of key points may require further investigation.

Conclusion and Future Directions

The automated paw annotation method presented in this article offers a promising solution for streamlining the data preparation process in rodent pose tracking, a crucial area of neuroscience research. By significantly reducing the time and effort required for manual annotation, the method has the potential to enable researchers to analyze larger datasets and generate more robust deep learning models for pose estimation.

Moving forward, further validation of the method in a wider range of experimental settings and across different species could help to solidify its effectiveness and versatility. Exploring the integration of this automated approach with downstream tools for behavior analysis, such as BehaviorDEPOT, could also unlock new possibilities for efficient and comprehensive data processing in neuroscience studies.

As the field of neuroscience continues to evolve, the development of accurate, automated, and efficient methods for rodent pose tracking will play a vital role in advancing our understanding of the complex relationships between neural circuits, behavior, and neurological disorders. The automated annotation method presented in this article represents an important step towards realizing this goal.
