What I Have Been Up To!
So in this post I want to talk about a new manuscript of mine. This paper is important to me because it is the end result of jumping blindly into a new field: neuroimaging. Specifically, I tasked myself with developing a method to streamline the segmentation of the rhesus macaque hippocampus from MRI without having to manually trace every hippocampus myself. One of my requirements was that I not sacrifice anatomical rigor in the process. I also wanted a method that did not involve prohibitive licenses, expensive software, or specialized hardware.
The new paper of mine that I am writing about in this post is “A Semi-Automated Pipeline for the Segmentation of Rhesus Macaque Hippocampus: Validation across a Wide Age Range” by myself and David Amaral. It is in PLOS ONE and is open access, so it is free to download. I also have the pipeline archived on GitHub.
What Did I Study and Why?
My primary focus in this manuscript was to fix a long-standing problem in neuroimaging: the dependence on research volunteers and undergraduates to manually trace, or segment, regions of interest for volumetric or ROI analyses. Tracers take a very long time to train to reliability, and natural student turnover makes this very problematic.
My first postdoctoral position involved manually tracing hippocampi on MRI scans of rhesus macaques at 9 time points: 1, 4, 8, 13, 26, 39, 52, 156, and 260 weeks of age. All told, that project involved manually tracing the brain scans of 24 monkeys at each of 9 time points, for a total of 532 hippocampal tracings. With my intra-rater reliability studies, which involved tracing every brain a second time in a different orientation space (radiological vs. neurological) and using a different program, the total came to 1064 traced hippocampi.
In my spare time (read: when my wrist was falling asleep and my shoulder ached), I sought to develop a method that would use my computer to do at least 90% of the work for me with minimal guidance. It is this work, my side project, that I am writing about in this post.
My Innovative Approach
I was fortunate enough to hear about the work being done in Pennsylvania on the Advanced Normalization Tools package, or ANTs. I consider myself fortunate because ANTs is open source and works very well on Mac OS X, which I was already using to run ANALYZE.
There were a few things from this research group in Pennsylvania that were immensely useful for me.
- None of their software depends on closed-source packages; the only real dependencies are the Insight Toolkit (ITK) and the Visualization Toolkit (VTK).
- They had an open source tool, ITK-SNAP, to trace regions of interest and place landmarks. I used another program because ITK-SNAP crashed my computer, but it is *very* useful if your computer plays well with Java.
- The tools in ANTs were better at bias field correction than ANALYZE and FSL, which was extremely important for scans taken at early ages in primates.
- John Pluta had developed a protocol that uses a few sparse landmarks to guide the warping of an already segmented hippocampus onto an experimental brain scan. The segmentation is then left on the experimental brain, providing a hippocampus tracing that requires placing only six landmarks.
- Hongzhi Wang developed a machine learning algorithm designed to fix systematic errors in automated segmentation protocols (particularly FreeSurfer).
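To give a rough feel for why a handful of landmarks is enough to pin down a global alignment: with corresponding points in the template and target, you can solve for a transform by least squares. The actual pipeline uses landmark-guided deformable registration in ANTs, not the plain affine fit below; the function names here are my own illustration, not part of the published method.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src landmarks to dst.
    src, dst: (N, 3) arrays of corresponding 3-D points."""
    n = src.shape[0]
    H = np.hstack([src, np.ones((n, 1))])  # homogeneous source coordinates
    # Solve H @ M ~= dst for the (4, 3) parameter matrix M.
    M, *_ = np.linalg.lstsq(H, dst, rcond=None)
    return M

def apply_affine(M, pts):
    """Apply a fitted (4, 3) affine parameter matrix to (N, 3) points."""
    H = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return H @ M

# Toy check: recover a known scale + translation from six landmarks,
# the same number of landmarks the hippocampus protocol requires.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(6, 3))
dst = src * 1.1 + np.array([2.0, -3.0, 5.0])
M = fit_affine(src, dst)
```

An affine transform has 12 free parameters, so six 3-D point pairs (18 constraints) already over-determine it; sparse landmarks carry a surprising amount of alignment information, which is what the deformable registration then refines.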
By applying these last two tools to my primate dataset, I was able to take a single rhesus macaque brain with a traced hippocampus at each age and use it as a template. This template was warped onto the other 23 brains at each age, using as few as 20 landmarks per brain to guide the registration (by slightly modifying Pluta’s methods). These landmarks took under three minutes to place on each brain. To overcome systematic registration errors in the automated segmentations, I used Wang’s correction algorithm to fix them, training it on the worst of the automated tracings with hand tracings as the reference.
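The key trick in carrying a template segmentation onto a new brain is that the label image is resampled through the computed warp with nearest-neighbour interpolation, so integer label values are never blended. Here is a minimal numpy/scipy sketch of that label-propagation step using a toy displacement field; the real pipeline does this with an ANTs deformation field, and `warp_labels` is a name I made up for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_labels(template_labels, displacement):
    """Propagate an integer label volume through a dense displacement
    field, using nearest-neighbour interpolation (order=0) so that
    label values stay intact instead of being averaged."""
    grid = np.indices(template_labels.shape).astype(float)  # identity grid
    coords = grid + displacement  # where each output voxel samples from
    return map_coordinates(template_labels, coords, order=0, mode="nearest")

# Toy example: an 8x8x8 "template" with a cubic hippocampus label,
# warped by a uniform one-voxel shift along the first axis.
labels = np.zeros((8, 8, 8), dtype=np.int32)
labels[2:5, 2:5, 2:5] = 1
disp = np.zeros((3, 8, 8, 8))
disp[0] = -1.0  # each output voxel samples one voxel "up" in the template
warped = warp_labels(labels, disp)
```

With `order=0` the warped volume contains only the original label values (here 0 and 1), which is exactly what you want for a segmentation; a higher interpolation order would smear fractional values across the hippocampal boundary.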
Overall, I found that in under two weeks of computer time I was able to do work that had taken me more than 8 months of 8-hour days to complete by hand with a prohibitively expensive software package. This time would have been reduced by a further 80-90% or so if I had access to a computing cluster or a computer with more RAM (I was using an off-the-shelf iMac purchased in 201 with 4 GB of RAM and, compared to the upgraded models, a relatively slow processor).
I am currently finishing up work using this same method to quantify region-of-interest volumes in brain scans from children with developmental disorders. These methods have allowed me to streamline my analyses without sacrificing neuroanatomical rigor. Combining automation with anatomical rigor has been difficult in neuroimaging, but my experience suggests the tools are already out there; we just have to pick them up and start using them!
Manually tracing hippocampi is hard, thankless work that takes a long time to learn. As humans, we are also highly susceptible to perceptual biases that affect our data (we tend to trace structures on our right as larger than the same structures on our left). If we leverage computational tools to do the work for us, the resulting errors are easy to correct using machine learning.
I hope anyone who reads this post and works with brain-region segmentation will adopt these methods and streamline their process. It is easier, requires fewer student trainees, and yields more reliable and valid neuroanatomical data.