Exploring AR as a potential way to help healthcare professionals perform more insight-rich deep vein thrombosis (DVT) scans

Currently, there are limited ways for healthcare professionals to learn and practice point-of-care ultrasound (POCUS). Although it is a life-saving tool, access is extremely limited, negatively impacting outcomes for millions of patients. It was my job to understand how AR could best be used to enrich this learning experience and, in turn, help bring this tool, as well as ThinkSono, to the mass market.


ThinkSono works to improve the ultrasound scanning process and experience. Through AutoDVT, they are able to significantly improve and speed up the DVT clinical pathway, allowing doctors to cut down the steps and hours involved in screening for DVT. This minimizes the time between seeing a patient and obtaining a final report.

They decided to explore a 3D/AR extension for the app in order to capture more angles of scanning a patient in 3D. This would further speed up the screening process and increase teaching capabilities. We would achieve this by recreating the current DVT video steps in 3D, as well as capturing difficult-to-see camera angles and renders, to be integrated into the app.

Throughout the app, real-life video recordings and screenshots show users how to perform the scan, along with other useful tips and tricks. In 3D, however, doctors can view new, difficult-to-see camera angles.
Furthermore, by transferring the scenes into AR, teaching becomes more immersive: the model can be explored more freely, and patients can be visualized in the real world.

Getting an initial sense of the ‘art of the possible’

To kick-start the sprint, we took notes and framed up how ThinkSono currently marketed itself. For context, ThinkSono’s app uses AI guidance, alongside a hand-held Clarius probe and a smartphone, for point-of-care scanning. I made it clear that I wanted to know what we had to work with, and how far we could take the project in terms of resources and scope.


The team were pretty clear that they “wanted better”, but we also agreed to work towards a clearly defined MVP. This would entail rebuilding the current scanning journey in 3D and getting a sense of how we could incorporate AR as the sprint went on.

I then started modelling key components that were standard and could be designed in parallel with our early thinking. Having modelled a hospital bed, the Clarius probe and an ultrasound gel bottle, I downloaded a basic rigged ‘patient’ character to animate and render. Using Maya’s nParticles, I created and animated a realistic gel effect that dispenses from the bottle.

Maya's nParticles created and animated for the 'gel' liquid
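
For reference, this kind of gel effect can be built with a standard emitter-plus-nParticle rig in Maya. The rough Python sketch below shows the general idea; the node names, emitter settings and attribute values are illustrative placeholders rather than the exact ones used in the project.

```python
import maya.cmds as cmds

# Directional emitter placed at the bottle's nozzle, pushing particles downwards.
cmds.emitter(name='gelEmitter', type='direction', dx=0, dy=-1, dz=0,
             rate=300, speed=0.5)

# Empty nParticle system (this also creates a nucleus solver) hooked up to the emitter.
cmds.nParticle(name='gelParticles')
cmds.connectDynamic('gelParticles', emitters='gelEmitter')

# Render the particles as a blobby surface so they merge into one liquid-looking stream.
cmds.setAttr('gelParticlesShape.particleRenderType', 7)  # 7 = Blobby Surface
cmds.setAttr('gelParticlesShape.radius', 0.08)

# Thicker, stickier behaviour so the gel clings to the skin rather than splashing.
cmds.setAttr('gelParticlesShape.enableLiquidSimulation', 1)
cmds.setAttr('gelParticlesShape.viscosity', 10)
cmds.setAttr('gelParticlesShape.stickiness', 2)
```

Blobby-surface rendering is one simple way to make discrete particles read as a continuous gel; in practice the values would be tuned by eye against the reference footage.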

Quick iterations kept the project on course

As I modelled, animated and rendered, I gathered feedback from the technical co-founder. We made sure that the angles and positions of the Clarius probe, the patient, the hands and the render cameras were all correct and followed the steps from the video. Collaboration was key, as anything missed would be a detriment to the doctors.

Feedback from random people at my co-working space

I also obtained feedback on the animation sequences from many people at my co-working space. I asked whether they understood what was happening, what could be improved, whether there was anything difficult to understand, and so on. Through careful framing and selection of questions, I was able to put them in the shoes of healthcare professionals (unfortunately, feedback from real users was out of scope).

This was fed back to the rest of the team, and we used it to improve parts of the project. Some of the main feedback was that there wasn’t enough sensory engagement with the current animations. People wanted to engage audibly as much as visually, and so questions of accessibility started to crop up in discussions. Generally, though, feedback centered around wrongly identifying key objects due to a lack of detail and realism.

Improvements

Based on the feedback, the technical co-founder wanted more visually professional renders, so we chose to incorporate:

  • A new, more realistic, downloaded patient character
  • A new hospital bed

As well as more detail through a:

  • Lattice deformer, used to replicate the skin being ‘deformed/squished’ by the probe.

This meant we could retain the realism of a physical person and, at the same time, provide the new, harder-to-reach angles. If we weren’t able to capture details like the skin deforming under the probe, the models would lack integrity and healthcare professionals could make misinformed decisions.
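
As an illustration of the lattice approach, a minimal Maya Python sketch might look like the following; the mesh name, lattice divisions, point indices and frame numbers are placeholders rather than the project's actual values.

```python
import maya.cmds as cmds

# Placeholder name for the patient's leg geometry.
leg_mesh = 'patient_leg_geo'

# Wrap a lattice (FFD) deformer around the area the probe touches.
# The command returns the ffd node, the lattice shape and the lattice base.
ffd_node, lattice, lattice_base = cmds.lattice(
    leg_mesh, divisions=(4, 4, 4), objectCentered=True)

# Pull a small cluster of lattice points inwards to mimic the probe pressing into the skin.
cmds.select('{}.pt[1:2][3][1:2]'.format(lattice))
cmds.move(0, -0.3, 0, relative=True)

# Key the deformer's envelope so the 'squish' eases in as the probe makes contact.
cmds.setKeyframe(ffd_node, attribute='envelope', value=0, time=100)
cmds.setKeyframe(ffd_node, attribute='envelope', value=1, time=110)
```

Animating the deformer's envelope is one straightforward way to blend the squish in and out in sync with the probe animation, rather than keying individual lattice points.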

Collaboration smoothed out the pipeline

I worked on these changes while collaborating with the team’s UX designer on the user-facing application in Figma. Using sticky notes, we outlined and identified the most important steps, angles and renders to use within the app. These were then added into his workstream, where he was focusing on the interface and experience aspects of the AutoDVT app.

I also started work on adding ‘highlights’ to key areas of the rendered scene, identifying important aspects for the user. This was done in Premiere Pro using mask paths.

Premiere Pro mask highlighting the character
Final 3D scenes with new improvements

How we incorporated AR

The team wanted to incorporate the immersive aspect of AR, but they simply didn’t know how it worked. I was able to explain the basic processes involved and put forward two possible solutions: an up-close scene of the knee for detailed learning, or a full view of the model for more autonomous examination.

We went with the latter, as the team were banking on immersion rather than specificity. After aligning with the technical co-founder, I imported the scene into Unity, where I corrected the lighting and shaders.
Alongside the Vuforia AR SDK, I added the Lean Touch input management asset from the Unity Asset Store. This allowed me to rotate and scale the 3D model in AR via my smartphone.

Unity Project
AR project

The End Result

In terms of outputs, we reproduced the current DVT scan in 3D and took initial steps to do the same in AR. Overall, we believed the outputs provided enough detail and immersion to aid healthcare professionals in their examining, learning and teaching. However, before I could suggest starting to test with real users, the next steps were cut short.

The company have been trialing their app in many hospitals worldwide. My 3D stills and video renders would be added in for new, upcoming clinical trials.

There was no longer a solid business case for my role.
The team gave positive feedback: the renders and animations worked nicely and looked professional for the project in mind. However, the co-founder said that, as a startup, they didn’t see any financial benefit in keeping on the role of AR developer. It was a role they were looking to explore initially, but as time went on they decided it would not help the company financially, and so they put a greater focus on fleshing out their pricing system.

Final render scene, using old character + bed

What I learned

Gaining new skills and techniques through the team’s desire for ‘realism’.
Realism was the chief aim given to me by the team. Although challenging, learning about nParticles and lattice deformers in Maya to achieve this was very interesting. With lots of trial and error, I discovered and created new 3D techniques. The results were great, and the team members were impressed.

Successfully working with other team members.
By collaborating with the UX designer, I was able to discuss and pick out the key steps of a DVT scan to use within the app. This involved thinking from the viewpoint of the user, which was something I hadn’t had much experience of previously. I also worked with the team’s engineer to set up a virtual machine via Amazon Web Services, giving me stronger CPUs/GPUs, faster render times and quicker work outputs.
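
For context, the render machine was essentially a GPU-backed EC2 instance. A rough boto3 sketch of spinning one up is shown below; the region, AMI, instance type and key pair are placeholders, and the team's actual setup (handled by the engineer) may well have differed.

```python
import boto3

# Hypothetical example of launching a GPU-backed render instance on AWS.
# The region, AMI ID, instance type and key pair name are all placeholders.
ec2 = boto3.client('ec2', region_name='eu-west-2')

response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',  # placeholder image with Maya/render tooling installed
    InstanceType='g4dn.xlarge',       # GPU instance type for faster viewport work and renders
    KeyName='render-box-key',         # placeholder key pair for remote access
    MinCount=1,
    MaxCount=1,
)

instance_id = response['Instances'][0]['InstanceId']
print('Launched render instance:', instance_id)
```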

Wearing multiple hats for a more complete skillset.
Due to the heavy output, the UX designer assigned me the task of editing and stitching the scenes together. This involved learning Adobe Premiere’s editing techniques, including importing and stitching 3D renders together to form videos, setting the right aspect ratios for the app, and creating mask highlights for areas of the video.