Will Apple’s Vision Pro Give Medical VR Some Legs to Stand On?

Apple’s Vision Pro ‘spatial computer’ and its companion visionOS give us much to be hopeful about. With this WWDC sneak peek in hand, what does the Apple Vision Pro mean for healthcare?

Level Ex is a flagship developer of medical content that showcases what’s possible on the latest consumer devices. We’ve built the first clinician-focused medical content on Magic Leap 2, Oculus, and HTC Vive – as well as launch content for Apple’s and Google’s mobile AR technologies: ARKit and ARCore.

Here is our take:

The Good:

Critical Mass is…Critical

With over 3M downloads, Level Ex has had 1M+ medical professionals play our games on mobile. Over 70,000 clinicians have played our mobile medical AR games. Roughly 7,000 have played our VR and AR headset experiences.

What’s the lesson here? In 2023, embracing VR/AR headsets still means sacrificing reach by two orders-of-magnitude. Apple’s entry into the market could change that trajectory in 2024-2025.

If anyone knows how to reinvigorate a consumer base and drive mainstream adoption of a plateauing modality (smartphones, tablets, etc.) it’s Apple. If Vision Pro (and its follow-on products) form a meaningful percentage of Apple’s next billion devices in-market, headsets would finally become pervasive. Doctors are consumers too. Once the majority of medical professionals have personal headsets, earning CME while practicing surgery or virtual patient interactions will finally become commonplace. 

This’ll Fit Right In

Vision Pro features a number of capabilities that we know from experience (on other devices) benefit healthcare training applications. Vision Pro’s support for Apple’s existing ARKit API means cross-compatibility with iPhone for AR content – and support for reliable, medical-grade augmented reality. ARKit was the first mainstream AR platform to capture the surrounding lighting environment and provide it as input to the application, allowing us to place virtual patients into the real world in ways that matter: What does it look like to try to intubate a patient on the floor of a dark room in an emergency scenario? Turn the lights off and find out.

Intubating a virtual patient using ARKit in Level Ex’s Airway Ex app in a minigame sponsored by Medtronic. ARKit will be fully supported on Vision Pro.

We’ve learned from experience with Magic Leap 2 about the benefits of ‘local occlusion’ in augmented reality applications.

Local occlusion on Magic Leap 2 – also possible on Vision Pro

Local occlusion means that you can project a virtual object into the real world, and that object will appear opaque. While Vision Pro takes a different approach to achieve this effect (Vision Pro does it in software, while Magic Leap 2 does it using optical semiconductor magic), it’s the result that matters. Previous-gen AR was limited in that everything it projected into the real world was semi-transparent. In medical scenarios, tools are occluded; your hand may be occluded. Recreating this occlusion is important for training scenarios, human and otherwise. Computer vision applications in the O.R. are often confounded by this occlusion.
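To make the software approach concrete, here is a minimal sketch (with made-up depth values and pixel labels) of how video-passthrough occlusion can work: for each pixel, the renderer compares the virtual object’s depth against the estimated depth of the real scene and keeps whichever surface is closer to the viewer.

```python
def composite(camera_rgb, camera_depth, virtual_rgb, virtual_depth):
    # Per-pixel depth test: the virtual object only shows where it is
    # closer to the viewer than the real surface, so a real hand held
    # in front of a virtual patient correctly hides it (local occlusion).
    out = []
    for c_rgb, c_d, v_rgb, v_d in zip(camera_rgb, camera_depth,
                                      virtual_rgb, virtual_depth):
        if v_rgb is not None and v_d < c_d:
            out.append(v_rgb)  # virtual surface is in front: drawn opaque
        else:
            out.append(c_rgb)  # real world wins (or no virtual content here)
    return out

# Four pixels: a virtual patient at 1.2 m; a real hand at 0.8 m covers pixel 2.
camera_rgb    = ["wall", "wall", "hand", "wall"]
camera_depth  = [3.0,    3.0,    0.8,    3.0]
virtual_rgb   = [None,   "patient", "patient", None]
virtual_depth = [9e9,    1.2,       1.2,       9e9]

print(composite(camera_rgb, camera_depth, virtual_rgb, virtual_depth))
# → ['wall', 'patient', 'hand', 'wall']
```

Real passthrough systems do this per-pixel on the GPU with depth maps reconstructed from the headset’s sensors, but the depth comparison itself is the whole trick.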

Privacy is a Big Deal in Healthcare

Privacy is a big deal in healthcare, and Apple’s focus on privacy as a differentiator is a meaningful selling proposition.

If you’re concerned that looking at someone’s eyes through this thing is weird, just remember that it’s less awkward than talking to your doctor while they look down at a computer screen to type into the EHR during your entire visit.

Apple’s App Universe is More Compelling Than Meta’s Lame Metaverse

I’ll take 100,000 iOS apps over metaverse hype any day.

I’ve commented many times about how Meta’s Metaverse vision (and implementation) lacks any real enduring value.

Who cares about buying pricey virtual Nike shoes and artificial real estate for your cheesy digital twin when there’s nothing compelling to do with it?

This is where Apple provides an especially stark contrast. Apple’s bringing its existing ecosystem to the table, and the value is palpable: FaceTime. Apple Arcade. Media. Safari. All with meaningful advantages on the platform. Plus easy portability from other platforms for developers.

Resolution is Key

We’ve created several VR and AR medical games that feature medical devices running software inside the game. Making UI text readable (and usable) on headsets can be a real challenge. Apple seems to be thinking about this the right way – being very aggressive about pixel density (64X denser than iPhone) and offering features like foveated rendering.

Fingers crossed, however… because Apple has yet to reveal how these pixels are actually spread out by the optics. Hence…

The Disappointing

What Apple (Deliberately?) Withheld

Apple conspicuously left out the two most important tech specs in their reveal: What is the field of view and how much does it weigh?

Mass and volume. The basics.

There are anecdotes of executives saying it’s a ‘limitless’ field of view, but that doesn’t…compute. My eyeballs don’t have a ‘limitless’ field of view inside my eyelids. This is a key stat that wasn’t shared, which triggers some skepticism. A narrow field of view limits usability in general. In healthcare, it limits the ability to recreate complex procedural scenarios where the challenge lies in operating in your three-dimensional environment: positioning yourself correctly relative to the patient and the surgical field, maintaining situational awareness across the various monitors, and coordinating with other people in the room.

As for the weight – they didn’t provide that either. WWDC developers could see the devices on a table but weren’t allowed to touch them… lest the forces imbued reveal the device’s top-secret weight. If this thing is anything less than featherweight, it’s a barrier to broad adoption. It’s still a boon for medical training, which often takes place in 10-60 minute intervals.

6/8 Update: Early-access journalists who tried the Vision Pro reported discomfort from the weight after extended use.

In the history of Apple’s “keynote reveals” – I don’t think there’s ever been a device announcement that didn’t include how much the device weighs on Earth.

Some Disappointing “Firsts”

In addition to the “Mass Mystery,” the Vision Pro marked a number of ‘firsts’ for Apple’s keynote announcements – none of them good. Vision Pro marks the first Apple product that I can remember where:

  1. The presenter wasn’t wearing/holding/using the device. Seriously – not a single presenter was shown wearing it.
  2. The device has a two-hour battery life. You can’t even watch Avatar on the external battery.
  3. The product isn’t going to be available within 6 months. Typically Apple’s June/September announcements are for products that are imminently available for back-to-school or holiday. Not this time.

And that head strap – as comfortable as it looks – doesn’t appear especially sterilization-friendly, making us nervous about potential approvals for use in medical and surgical practice.

Luckily, the strap isn’t so much an issue for medical training, which stands to benefit substantially from Apple’s remarkable entrance into the AR/VR market.

In Conclusion

As an Apple user and developer, I want a Vision Pro. I’ve been eyeing a personal set of AR glasses to give myself a virtual widescreen monitor on airplanes (I spend a lot of time on airplanes), and this offers so much more.

As more is revealed about Vision Pro, let’s hope Apple will give medical AR/VR some solid legs to stand on in 2024. 


This article was written by Level Ex CEO Sam Glassenberg and originally featured on LinkedIn

Read original article.
Follow Sam on LinkedIn
to learn more about advancing medicine through videogame technology and design

The Impact of Generative AI on the Video Games Industry (and Beyond)

Is ChatGPT going to replace all of us? 😱 “If you look at the way we’ve been doing games for the last 30 years, every 5 years you’re developing for a platform that delivers an order of magnitude more content.” Level Ex CEO Sam Glassenberg and Jason VandenBerghe, our Game Design Director, discussing the forthcoming impact of generative AI on the video games industry (and beyond). Sam has been documenting his exploration of ChatGPT, so check out his recent article about the chatbot’s inner workings to learn more: https://bit.ly/41wKxu9 #levelex #medicalgames #AI #generativeAI #videogames #gamedev #technology

Ultrasound Part 2: Why Self-directed, Just-in-Time Training is So Important 

By Eric Gantwerker, MD, MMSc(MedEd), FACS (Level Ex VP, Medical Director)

We have witnessed a huge shift in knowledge access over the past several decades. What was formerly available only in libraries and professors’ brains is now on the devices everyone carries in their pockets at all times. As a result of the digital age, knowledge shortfalls can now be corrected promptly with Just-in-Time access to content and facts. However, there is still a need for learners to acquire foundational knowledge and conceptual understanding in order to integrate new information into their knowledge base.

A Post-Pandemic Education Landscape 

COVID-19 undoubtedly changed the landscape of education at every level, as schools scrambled to create online learning platforms to address the massive in-person learning losses. This was no different in healthcare education, where training was heavily predicated on in-person experiential learning opportunities. Even simulation suffered during the pandemic, as centers were closed and access to hardware-based learning resources ground to a halt.

Through the pandemic’s forced closures and cancellations, we came to appreciate the higher cost, lower access, and element of chance inherent in in-person experiential learning: higher expense and time investment for clinicians to travel and participate in in-person opportunities, less access for those with financial or geographical limitations, and a reliance on chance to dictate the learning opportunities for patient-based training.

As the new normal sets in post-pandemic, educational leaders are analyzing the ways that asynchronous and synchronous, remote learning opportunities can transform education by replacing and augmenting in-person, synchronous learning experiences. This means taking advantage of the Just-in-Time learning, lower cost, and higher touchpoints afforded by mobile and software-based solutions to impart knowledge and sharpen skills.

The future is hybrid learning for both medical and surgical education. We have realized that the learning curve for any cognitive or psychomotor skill is climbed most efficiently through a combination of asynchronous components, synchronous remote components, and synchronous in-person elements. If we can get learners higher on the learning curve prior to the high-cost, low-access touchpoints, we maximize the efficiency of learning. This asynchronous learning should be available when clinicians need it, on the device they already have.

The Value of Self-Regulated Learning  

In education, we discuss the concept of self-regulated learning (SRL), which refers to how highly motivated learners take command of their own learning: they identify their own knowledge and skill gaps, actively address them, receive feedback, and continue the cycle as they work through their learning deficits.

In general, highly motivated and astute learners, such as astronauts, just need access to materials—without oversight or faculty—to address those deficits. As astronauts train for missions, their training schedule is jam-packed, and only a small fraction of that time is devoted to learning about medical emergencies and procedures. 

Typically for near-space missions, crews have access to flight surgeons on the ground to guide them through any medical scenario—but what happens when they are on a deep-space mission to Mars, where the communication delay is 20 minutes each way? The emergency may have already played itself out by the time communication to the ground and back has happened. So astronauts need Just-in-Time, efficient mechanisms to quickly train on how to evaluate and treat these emergencies, even if it’s just a refresher for them. This includes cognitive tasks (what is the diagnosis?) as well as psychomotor tasks (how do I perform an ultrasound, and what am I looking at?).

To meet this need, Level Ex developed a virtual training platform for space crews centered on this Just-in-Time training approach. Building on our prior work and in collaboration with the Translational Research Institute for Space Health (TRISH) and KBR, we built a solution for the upcoming Polaris Dawn mission consisting of two parts—both aimed at enabling astronauts to better monitor their health and maximize their safety in space. 

Our pre-flight orientation and training guide teaches the crew how to use a handheld Butterfly iQ+ device for ultrasound imaging that will be onboard the spacecraft. During their 5-day orbit mission, the crew will use Just-in-Time training and procedural guidance that Level Ex created to perform the ultrasound procedures on themselves and collect data. The crew will be tracking their blood flow patterns daily to learn more about how the zero gravity environment influences the human body. This experiment will also test the efficacy of using virtual training solutions like video games for Just-in-Time training on medical technology and procedures.

Polaris Dawn crew members practice using a handheld Butterfly iQ+ device for ultrasound imaging. The Polaris Dawn mission is slated to launch in 2023.

Training the Mind, Without the Medical Device    

Many may wonder how they’re going to learn a psychomotor/technical task without a specific medical device in their hand. The answer depends on whether they are a novice or expert in that task. If they are a novice, the first parts of any procedure are knowing: 

  • The context (where am I, and what am I trying to do?)
  • The specific parameters of the equipment they are using (what does this button do?)
  • The steps of the procedure
  • How to analyze what they see 
  • How to physically perform the task  

Many of these are actually cognitive in nature, meaning the learner doesn’t need the actual medical device in hand. Oftentimes, not having the medical device in hand actually optimizes the cognitive load so they can focus on the elements without fidgeting with the device itself. 

Too often, however, this part is done through passive didactics and endless reading of manuals and documents. But there is a better way. Having a meaningful, interactive experience on their own device can create this opportunity to learn. This is not to say they never need to train with the specific medical device, but if they already have a strong understanding when they do have the device in hand, they can focus on enacting the strategy they have already created through countless cycles of trial and error beforehand. 

Even force feedback, the simulation of real-world physical touch, has a significant visual component that your brain processes even more than the tactile feedback itself. For example, think of a video of a rubber band around a watermelon about to burst: the viewer perceives the force without actually touching the watermelon.

Expert learners can also do a fair amount of training on the medical device before having it in-hand. Again, one needs to orient to what is different from prior experience and understand the strategy and approach. This is why “how I do it” videos are so popular among surgeons, because they can simply watch the video and then go to the bedside and enact that strategy. Their expert eyes easily see changes in patterns and integrate them into their knowledge base. They will need the physical device at some point, but by that time, they will have already run through the procedure in their head hundreds of times. 

Regardless of their experience level, training beforehand on their own time, with their own device, is a much lower-cost and higher-frequency touchpoint compared with any in-person lab or cadaver workshop. If the learner is well trained on the cognitive components of the procedure, they can focus all their attention on actually holding the device and on the mechanical and technical aspects of the procedure. This will limit the time needed to learn in person, cut down on boring lectures and technical guides, and drill into them what they need to know, when they need it.

Such self-directed, Just-in-Time training maximizes the efficiency of learning any new device or technique, yielding better-quality learning at lower cost. It’s a win-win.


Interested in learning more about Level Ex’s technology and how it’s accelerating medical device training and adoption? Contact us.

Forget ChatGPT4 – I’m Preparing My Prompts for GPT7

The mindbogglingly rapid evolution of OpenAI’s ChatGPT is making it hard to keep up with its quickly-expanding arsenal of capabilities. So let’s just jump ahead…. Here are the prompts I’m preparing, assuming certain features will be available in ChatGPT6 or 7:

  • Please create the next season of the Expanse. More Drummer, please. Let’s make it 8 episodes? 
  • Please create a video of the “Bear Jew” backstory scene from Inglorious Basterds that was written in the script but never filmed: The one in 1940 where Eli Roth’s character is going house-to-house visiting Jewish refugees in Brookline so they can carve the names of their European relatives into his bat: the ones who didn’t make it out of Europe in time. 
  • Make a VR experience of “Boardwalk Empire” – I’d like to be able to speak to the major characters, acted by the actors in the series. Fill in any gaps using their real-life stories and personalities. 
  • Create 2 movie versions of the graphic novel, Maus: One in the visual style of the graphic novels, and one in the style of a Pixar film. 
  • Please create a movie version of “Project Hail Mary” by Andy Weir. I can’t wait for the official movie to come out in 2025. Feature Ryan Reynolds as the protagonist instead of Ryan Gosling. Use “The Martian” film for reference – they did a good job adapting the book on that one. 
  • Generate a full CFD simulation of a Saturn V rocket launch. I’d like to explore it in VR, in 100x slow motion, with the option to clip anywhere to see cutaway views of the hardware, engines, and conduits. 
  • Make the following episodes of “epic rap battles of history”:
  1. Greta Thunberg vs. Captain Planet
  2. Dr. Oz vs. the Wizard of Oz
  • Please create a movie rendition of the Hanukkah story in the style of Zack Snyder’s 300. 80 minutes long. 


Marketing to ChatGPT: The Next Evolution of SEO?

Have you tried asking ChatGPT about yourself or your company – by name?

If you are Arnold Schwarzenegger, or if you work for Microsoft, chances are you’ll get a result that is spot-on. 

If you’re only moderately famous or work for a small company, chances are you’ll get a result that is “sort of” right, with a few errors (maybe they’ll get your company’s founding year wrong, or where you earned your college degree).

…Or ChatGPT might know nothing about you. She might simply make stuff up (she’s really good at that).

If you run a small restaurant in Champaign, Illinois – how do you get ChatGPT to offer your establishment as a recommendation when someone asks “Where’s the best place to eat lasagna near UIUC?”

Or if you’re Kellogg – how do you get ChatGPT to respond to a question about breakfast cereal with “Frosted Flakes? They’re grrrrreat!” 

AIO: The Next Incarnation of SEO

For the past two decades, the evolving discipline of SEO has fought this battle with search engines.

In 2023, a fundamental shift will take place. A new discipline will emerge… Artificial Intelligence Optimization: optimizing content to increase its prominence in the training datasets of AI agents like ChatGPT.

More specifically:

  • How do you place your content so it’s more likely to be included in the training data?
  • How do you structure your content so it will have the biggest impact on the training algorithm? 
  • How do you write your content to maximize positive associations between your brand/name and the concepts (not keywords) in the artificial neural network?
  • How do you create your content to maximize the likelihood of attention mechanisms focusing on your keywords in the input sequence?

Measurement and Testing

SEO success is measured as follows: in the list of search results for keyword searches, how prominent does your content appear?

AIO success is measured differently. Of course, you can ask subsequently-trained models certain questions and make sure that the answer reflects what you want it to – but there are more objective approaches. You need only look at the word embeddings for your brand/name to see what other concepts it is associated with. These word embeddings are simply vectors – lists of numbers – and the similarity between two embeddings measures the strength of the association between your brand/name and other concepts. The higher the association value with positive concepts, and the lower the association value with negative concepts, the better you are doing.
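As a toy sketch of that measurement (the four-dimensional vectors and concept names here are made up for illustration; real embeddings have hundreds of dimensions and would be read out of the model itself), you could compare a brand’s embedding against positive and negative concept embeddings using cosine similarity:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # ~1.0 = strongly associated, ~0.0 = unrelated, negative = opposed.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings (not from any real model).
brand     = [0.9, 0.1, 0.8, 0.2]
delicious = [0.8, 0.2, 0.7, 0.1]   # a positive concept
unhealthy = [0.1, 0.9, 0.2, 0.8]   # a negative concept

print(f"brand vs. delicious: {cosine_similarity(brand, delicious):.2f}")
print(f"brand vs. unhealthy: {cosine_similarity(brand, unhealthy):.2f}")
```

A brand doing well under this metric would score high against the positive concepts and low against the negative ones, and an AIO team could track those scores across successive model releases.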

Because many models are open-sourced, marketing/AIO agencies will be able to do massively parallel test-runs on various models to see how changes to content affect how the content impacts the neural net. This will actually be fairly easy to implement and not especially computationally intensive, even though these artificial neural networks are usually a ‘black box’. All you need to look at is changes to the weights (referred to as the model’s parameters), and see what the impact is on word embeddings to determine if you’re getting the results you want. 

Best Practices

The rest of this article is made up of the most useful responses from ChatGPT to a barrage of questions on the subject.

Where to Place Your Content To Maximize Inclusion in Training Datasets

ChatGPT’s training data is sourced from a diverse range of publicly available text found on the internet. There isn’t any official guidance on specific locations to place your content for inclusion in ChatGPT’s training data. However, by considering some general practices, you can increase the likelihood of your content being included:

  1. High-quality content platforms: Publish your content on popular and authoritative websites, blogs, or platforms related to your niche or industry. Content from reputable sources is more likely to be considered during the data collection process.
  2. Collaborate with influencers or experts: Partner with influencers or experts in your field to increase your content’s visibility and credibility. Content from recognizable figures or industry experts may be more likely to be included in training data.
  3. Engage with online communities: Share your content on relevant online communities, forums, or discussion boards, such as Reddit or Quora. These platforms often contain diverse and engaging content, which can make them valuable sources for AI training data.
  4. Social media sharing: Post your content on social media platforms like Twitter, LinkedIn, or Facebook. High-quality content that generates significant engagement may have a higher likelihood of being included in the training dataset.
  5. Open-access repositories: Publish your content in open-access repositories, journals, or platforms that are available to the public and researchers. This type of content is more likely to be accessible and used as a data source for AI models.
  6. Search engine optimization (SEO): Optimize your content for search engines to increase its visibility and discoverability by AI model developers or web crawlers.

While these guidelines don’t guarantee your content’s inclusion in ChatGPT’s training data, they can improve the chances of your content being considered.

AI Optimization Strategies: How to make your content AI-friendly

Optimizing content to maximize the emergence of patterns learned by AI language models like ChatGPT involves making your content more understandable, relevant, and extractable for the model. To achieve this, tailor your content to align with the patterns the AI model has learned. Here’s a concrete example to illustrate this process:

Suppose you want to optimize content about “The Benefits of Yoga” to be prominently featured in ChatGPT’s responses.

1) Clear and concise language: Write an introduction using simple language and avoid jargon.

Example: “Yoga is an ancient practice with numerous physical and mental health benefits. It combines physical postures, breathing techniques, and meditation to promote overall well-being.”

2) Logical organization: Use clear headings and subheadings to divide the content into sections.

Example:

3) Keyword usage: Incorporate relevant keywords and phrases naturally throughout the content.

Example: “Yoga helps improve flexibility, strength, and balance while also reducing stress, enhancing focus, and promoting better sleep.”

4) Semantic relationships: Use synonyms and related phrases to reinforce the topic.

Example: “The advantages of practicing yoga extend beyond mere physical exercise, as it also fosters mental clarity and emotional resilience.”

5) Authoritativeness: Cite credible sources, such as scientific studies or expert opinions, to support your claims.

Example: “According to a study published in the International Journal of Yoga, regular yoga practice can significantly reduce stress levels and improve overall mental health.”

6) Recency and updates: Use the latest information and update your content regularly.

Example: “A recent 2023 study found that yoga may also help boost immune function, adding to the growing list of benefits.”

7) Answer common questions: Address frequently asked questions related to the topic.

Example:

  • “What are the different types of yoga?”
  • “How often should I practice yoga to see benefits?”
  • “Can beginners practice yoga?”

By applying these principles and using the example as a guide, you can create content that aligns with patterns learned by ChatGPT and increase the likelihood of your content being featured in the AI model’s responses. Remember that AI models are constantly evolving, so staying up-to-date with the latest developments and adjusting your strategies accordingly is crucial.

The Subtle Differences Between SEO and AIO

As AI continues to be the life of the digital marketing party, it’s time to embrace AIO and optimize your content for AI-driven platforms. But what sets AIO apart from our old friend SEO? Let’s break it down with some concrete examples:

SEO:

  • Focuses on optimizing content for search engine visibility
  • Targets specific keywords to rank higher in search results
  • Uses backlinks, metadata, and on-page optimization to improve rankings

Example: In our travel blog post about “Top 10 Must-Visit Destinations in 2023,” we would optimize the content for search engines by using a target keyword like “best travel destinations 2023,” optimizing meta tags, and obtaining backlinks from reputable travel websites.

AIO:

  • Focuses on optimizing content for AI model comprehension and reference
  • Prioritizes clear language, logical organization, and semantic relationships
  • Aims to ensure content is included in AI model training datasets and properly understood by AI platforms

Example: For the same travel blog post, AIO optimization would involve using clear language, breaking the list into subheadings, playing with synonyms, citing credible sources, and addressing common questions to ensure AI models like ChatGPT understand and reference the content accurately.

Conclusion

In a nutshell, SEO is all about pleasing search engines, while AIO is about becoming besties with AI models. As the digital landscape evolves, it’s essential to strike a balance between these two strategies to stay ahead of the game and make your content shine in both search results and AI-driven platforms. So, put on your AIO hat and get ready to make your content the life of the AI party!



Subsurface Scattering: The Secret to Relevant Medical Imagery is Below the Surface

You’re more transparent than you think.  

Imagine holding your hand up to a flashlight. As the light shines onto your hand, some of it enters the skin and bounces around inside. The light scatters in all directions and escapes at points some distance from where it entered, creating a soft, warm glow that appears to come from within the skin.

This effect is called “subsurface scattering,” and simulating this complex phenomenon is crucial when recreating organic materials. The flashlight demonstrates this effect in the extreme—but in its more subtle day-to-day form, subsurface scattering is a critical element in film, video games, and medical training. Subsurface scattering (SSS) serves as a bridge over the “uncanny valley,” and is often a major differentiating factor between “cheap computer-generated images (CGI)” and “the good stuff.” 
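One of the cheapest real-time tricks for faking this effect is “wrap” lighting. This is a minimal sketch, not any particular engine’s implementation: standard diffuse lighting cuts off sharply at the terminator (the line between lit and unlit), while wrap lighting lets light bleed past it, softening the falloff the way scattered light glows through skin.

```python
def lambert(n_dot_l):
    # Standard diffuse shading: illumination drops to zero the moment
    # the surface faces away from the light. Looks "plastic".
    return max(n_dot_l, 0.0)

def wrap_diffuse(n_dot_l, wrap=0.5):
    # "Wrap" lighting, a cheap approximation of subsurface scattering:
    # the diffuse term is remapped so light wraps past the terminator,
    # softening the falloff the way scattered light bleeds through skin.
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

# Sample across the terminator: n_dot_l is the cosine between the
# surface normal and the light direction (1 = facing the light).
for n_dot_l in (1.0, 0.5, 0.0, -0.25, -0.5):
    print(f"{n_dot_l:+.2f}  plastic={lambert(n_dot_l):.2f}"
          f"  skin={wrap_diffuse(n_dot_l):.2f}")
```

Notice that at the terminator (`n_dot_l = 0.0`) the plain diffuse term is already black, while the wrapped term still glows, which is exactly the soft transition your eye reads as “alive” rather than “plastic.” Production shaders go much further (per-wavelength falloff, thickness maps, diffusion profiles), but the intuition is the same.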

Why does subsurface scattering matter to dermatological diagnosis?

In dermatology and other medical fields, the effects of subsurface scattering are crucial to the diagnosis of skin disease, and to monitoring its severity over time. A great deal of aesthetic dermatology is also focused on modifying the behavior of skin below the surface. Diseases and treatments that impact skin layers below the surface will affect the subsurface scattering of light, creating subtle, visual clues recognized by the trained dermatologist. Diseases may impact blood flow, or layers below or above pigment that might only be revealed under certain lighting conditions. Because diseases may cause changes below or above pigment layers, subsurface scattering has a great deal of impact on how diseases reveal themselves on different skin tones.

Images from Top Derm, Level Ex’s interactive experience for dermatologists.

Where it started: The evolution of realistic video game graphics

Subsurface scattering in early CGI

The technology behind the first CGI films in the 1990s couldn’t achieve this effect, which is why characters in those films look somehow “wrong” or “artificial.” People describe these characters as looking like they are “made of plastic,” which makes sense, as cheap plastics don’t exhibit subsurface scattering.

“Andy” in the original Toy Story and in Toy Story 4. Photo: Pixar Animation Studios

I wrote my first subsurface scattering “shader” program in 1999 at the University of Illinois, simulating the behavior of light through each layer of skin: Different skin layers absorb and scatter light at different frequencies based on factors like pigment and blood flow. I included it in my application for my first job at LucasFilm.
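That per-layer simulation can be sketched, very loosely, as Beer-Lambert attenuation through stacked layers. The layer names are real anatomy, but the absorption coefficients and thicknesses below are illustrative placeholders, not measured optical data:

```python
import math

# Attenuate each color channel through each skin layer (Beer-Lambert).
# Coefficients are illustrative, not measured optical properties.
LAYERS = [
    # (name, absorption per mm for (R, G, B), thickness in mm)
    ("epidermis", (0.8, 1.2, 1.6), 0.10),
    ("dermis",    (0.3, 0.9, 1.4), 1.50),
]

def transmitted(light=(1.0, 1.0, 1.0)):
    """Return the (R, G, B) light that survives passing through all layers."""
    r, g, b = light
    for _name, (ar, ag, ab), thickness in LAYERS:
        r *= math.exp(-ar * thickness)
        g *= math.exp(-ag * thickness)
        b *= math.exp(-ab * thickness)
    return (r, g, b)

r, g, b = transmitted()
```

Because red is absorbed least, it survives deepest into the tissue, which is why a flashlight pressed against your palm glows red.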

In film, we have hours to calculate all of the light physics for each frame, but in video games we must do it at 60 frames per second. At Microsoft, my DirectX team invested a great deal of effort into making real-time SSS possible for in-game characters, creating technologies such as “precomputed radiance transfer (PRT),” a crazy mathematical “hack” that can calculate much of the costly SSS light simulation before a game is run.

Various DirectX precomputed radiance transfer examples provided to game developers.

Mid-2000s: The video games industry’s race to make more realistic visuals 

The first video games to achieve SSS in real time were released in the mid-2000s to great fanfare. The race was on among game developers to adopt these incredibly complex techniques. I discussed this subject during a presentation in 2006, when I took on the daunting task of explaining precomputed radiance transfer to artists. It was the highest-rated talk at Microsoft’s game developer event that year.

SSS Off (Left) vs On (Right) on a video game character.

How it’s going: Advancing clinical education with hyper-realistic medical graphics at Level Ex

At Level Ex, we invest heavily in subsurface scattering research—in creating our own custom shaders and algorithms that run on mobile devices, cloud simulators, and VR/AR devices.

SSS Off (Left) vs On (Right) on a virtual patient with skin of color.

A cube made of virtual skin, one of many tools used to test iterations of Level Ex’s skin rendering technology.

Although skin is the most prominent example of subsurface scattering, this phenomenon is important in many other therapeutic areas. For example, in Gastro Ex, our interactive experience for gastroenterologists and colorectal surgeons, the tissue glows as it is stretched toward the bright light of the colonoscope:

An image from Gastro Ex, our interactive experience for gastroenterologists and colorectal surgeons.

The Bottom Line

Without accurately simulating subsurface scattering, medical imagery risks not only being inaccurate, but also potentially misleading. Accurate simulation helps doctors develop crucial mental models to intuitively understand how a disease (especially skin disease) manifests in the many layers of tissue.

Luckily, strong consumer demand for more realistic characters in video games and film has unlocked massive investment in this problem. The resulting solutions yield results that are indistinguishable from reality—solutions that the healthcare industry can leverage to advance the practice of medicine.

Interested in learning more about Level Ex’s technology and how it’s advancing clinicians’ knowledge and improving their medical skills? Contact us.

Rare Diseases Q&A: How to Better Spot Zebras in the Herd

Rare diseases affect more than 25 million patients in the U.S., yet research shows it takes between one and five years before many patients receive an accurate diagnosis. We sat down with Adam Cifu, MD, a general internist and Professor of Medicine at the University of Chicago, to discuss why rare diseases often fly under the radar, what diagnostic challenges clinicians and patients face, and what changes could improve detection.

Q. What common challenges do clinicians encounter when it comes to rare diseases? 

A. The first barrier is that we as physicians are taught in our diagnostics to think about epidemiology first. In your daily practice, you’re going to be seeing common things over and over again. Those are the things that come to mind. If you’re thinking about rare diseases every day, you’re probably not practicing efficiently because you’re getting caught up in things you are so unlikely to find. And with rare diseases, by definition, even if you’ve read about them in medical school or residency, because you’re seeing them so infrequently, you’re going to forget about them. They are not going to be easily accessible in your memory. Those challenges make it very difficult for a patient who is coming in with a rare disease, even if they are presenting with symptoms that are classic for a rare disease. The physician is going to take more time, need to think about who they need to call upon for help, and probably need to work a little harder to come to the correct diagnosis.

Adam Cifu, MD, General Internist and Professor of Medicine at the University of Chicago

Q. While clinicians can’t go into every patient case thinking it’s a rare disease, how do we train doctors to consider rare disease more often? What should trigger that mindset? 

A. Students in their first clinical year, what’s exciting about them is that they know an impressive array of diseases. But they haven’t ranked those diseases in their minds. We teach them hypothetico-deductive thinking first, and the pattern recognition comes later, with experience. But once you’re more experienced, it’s unlikely you are going to make one of these difficult diagnoses of a rare disease the first time you meet a patient. More likely, the person is going to come in and their symptoms are going to fit something you see commonly, and you’re going to go after that first. Where we need to do a better job training physicians is when the patient returns and is not getting better. It should really be on visit two that the doctor is not doubling down on the three diagnoses they thought of the first time. That’s where I think we can do a lot better in thinking about and considering rare diseases.

Q. With all of the barriers and challenges, how long does it generally take a rare disease patient to get the correct diagnosis?

A. It depends on which disease, and it depends where the patient is seen the first time. Unfortunately, it also depends on who the patient is. We recognize there are a lot of disparities in healthcare, and so for some patients with the same condition—women, minorities—the correct diagnosis tends to take longer to be made. There are also practices that are set up for volume, to get patients in and out quickly, such as clinics primarily taking care of healthier people. Patients with rare disease are better served by practices that have more expertise and more lenient scheduling that allow doctors and patients to dive deeper into symptoms.

Level Ex has a dedicated game mechanic for rare disease, honed and tested across multiple therapeutic areas. In Cardio Ex for example, players narrow down diseases through reductive reasoning. Our diagnosis game mechanic is deeply integrated into the game’s outer loops, in which a player’s success in playing diagnosis levels earns achievements, experience points, and in-game progress.

Q. Share some rare disease cases you have experienced during your career. How were the diagnoses made?

A. I’ll start with a patient who I saw fairly recently who came in with fatigue, a classic symptom of so many sick patients coming to the doctor. When I dove into his fatigue, however, it wasn’t the kind of fatigue that most people complain of. This man was talking about intermittent, overwhelming fatigue during the day. My first thoughts were the usual things with fatigue: obstructive sleep apnea, poor sleep, hypothyroidism. None of those were the cause, so then I started thinking about less common disorders. I thought of possible narcolepsy. I started asking him about warning signs before the fatigue, and the only warning sign he could come up with was a foul smell. That triggered me to think about seizures and temporal lobe epilepsy. Fortunately, we were able to make the correct diagnosis the next day, during brain imaging. He had a CNS mass, a brain tumor.

Two intensive care cases also come to mind that are super zebras. In one, we had a terribly sick man. The only really striking thing about him was horrible conjunctival erythema. His exposures were uninteresting, although he did work for public transit and did have a lot of contact with messy, not terribly clean tracks. It was a more senior nephrologist, who was seeing him as a renal consult, who looked at the guy and said, “This is leptospirosis.” It remains the only time I’ve ever seen leptospirosis, a rare bacterial disease that can cause organ failure.

The second was a patient flown by helicopter from an outside hospital who had terrible pneumonia. We had taken the history and were treating him for all the usual suspects. A few days later, our infectious disease consult was talking to his wife and asked her if anything unusual had happened to him. The wife remembered that a week before, she had noticed a dead rabbit and asked her husband to “take care of it.” The man rode his lawn mower over the rabbit. It turned out he had tularemia, a disease especially prevalent in rabbits that can be passed to humans. He had tularemic pneumonia. We would have never known, but it was one person who didn’t even really have a diagnostic hypothesis, but who was diligent in figuring out all the possibilities to make the diagnosis. 

Level Ex has developed various diagnostic experiences that help clinicians better identify rare diseases. In one rare disease experience, clinicians are led through the differential diagnosis of affected patients presenting with different sets of symptoms. The experience improves understanding of the disease and trains clinicians to include the disease in their early differential workflow.

Q. Is there enough training around rare diseases in early medical education? What about for older, experienced doctors as they go through their careers?

A. I think our medical schools do a good job in this area. We expose students to lots of different diseases in the classroom. We can’t expose them to everything. Residency, to a great extent, is about getting really good at the common stuff, because a very good doctor is one who does an excellent job diagnosing what’s common. I think maybe where we fail is that we need to work on the diagnostic reasoning side in our residents, to address where they make that thinking shift to more rare diseases.

We also have to think about our continuing medical education for our more senior physicians. I think we should say, “Ok, you’ve mastered 100 diagnoses that you see commonly in your practice, let’s start extending those little by little.” Wouldn’t it be neat if, at specialty medical conferences, they had educational sessions on five things that you may never see, but if you do see them, you might be the person who saves a life? And even if you don’t make the diagnosis, you understand what difficult-to-diagnose diseases look like, how you should frame the differential, and maybe how you start the evaluation and get it to the person who is going to make the diagnosis.

Q. How do you think new technology and advancing analytics will impact the identification of rare diseases? 

A. Clearly, artificial intelligence (AI) is going to be an enormous boon in parts of medicine—not as a replacement for doctors, but as incredible decision support. We’ve already made huge progress with the ability to read images and read pathology, and AI will make our radiologists and pathologists better, sort of like automated readings of EKGs have made our cardiologists better. A radiologist will be able to get through more chest X-rays and more mammograms with the aid of AI. It’ll key up some things that may have been missed.

However, the problems with using an artificial intelligence model for first-line diagnostics are that patient histories tend to be very non-specific and have to be read with an understanding of who this patient is, how they express themselves, how concerned they are—all those things which I can’t imagine this generation of computers being able to do. Not to mention the lack of replicability from doctor to doctor: if two doctors examine the same person, they are unlikely to report the same thing or recognize things in the same way.

Where I think technology will really be able to help us is with those patients you’re seeing a second or a third time when you’re not making any progress. Being able to have a system into which you can put symptoms, findings, and epidemiology, which will not give you a diagnosis but will say, “here are three things you may not have thought about,” that’s going to pull up those rare diseases and potentially help lead you to the right diagnosis.

Level Ex Game Jam 2022

Level Ex recently honored its colleague Tom Huffman with a Game Jam that targeted Neuroendocrine Tumors (NETs), a rare disease initially misdiagnosed in 90% of patients. During Game Jams, Level Ex’ers design medical video games from scratch.

Q. What can clinicians improve on when it comes to rare diseases?

A. I think we can do a better job highlighting the patient’s role in their diagnosis. We’ve already talked about how it’s necessary to expand your thinking at that second or third visit if the patient is not improving. Part of that expansion could be listening more to the patient and taking note of what they are telling you that may be outside the norm. Hear about their symptom diary and address things that perhaps at the beginning, you did not feel were a productive area of the history. There may be something useful there if we listen more closely.

Thank you, Dr. Cifu. We’ve heard from an expert, now let’s hear from a patient. In the next part of our post, we speak with our own Mallory Pickard, Level Ex Strategy Director, about her personal journey with a rare disease and her road to the right diagnosis. 

Q. Tell us about your illness and when your symptoms began.

A. Last April I was diagnosed with AOSD, which is adult-onset Still’s disease, an autoinflammatory disorder. All my symptoms started in December 2020. I never tested positive for Covid, but essentially every physician I talked to thought I had Covid. When I wasn’t getting better, they said, “We think you have long Covid, and that’s why you have so much fatigue and these fevers.” Weeks turned into a few months. I was seeing a neurologist and a cardiologist and a rheumatologist and all these folks to rule out all these different things. But it kind of got to the point where we had checked all of these bigger conditions off the list and the doctors still didn’t know what it was.

Q. What was the disease journey like for you, and how did the ongoing uncertainty impact you?

A. It was a whole rainbow of emotions. You get to a point where you’ve gone through so many tests and have invested quite frankly, from the patient perspective, a good amount of money trying to figure out what’s wrong with you, and the doctors keep saying, “Well, it’s not this and it’s not that.” When you hear that enough times, you wonder if this is psychological, if this is just me. And I literally would have to go look at pictures of the thermometer and other physical symptoms on my phone or MyChart, to remind myself that no, here’s the physical stuff that’s showing up, it’s real. So there’s that whole sort of psychological struggle with the unknown. You can hit low points and think, “I’m not even gonna bother anymore. Like, I don’t want to hear one more person say, ‘Maybe you need to get more sleep.’” Looking back, I was learning as I went that nothing is black and white in healthcare, and that being diagnosed with a rare disease is often a very long process of elimination.

Q. How was the diagnosis finally made?

A. A year and a half after my symptoms started, I was talking with a new rheumatologist about the journey I had been on. He said he’d just had a conversation with a physician colleague of his who is treating somebody with Still’s disease, and he said, “That’s what it sounds like to me.” So the only reason we started down this path and learned the diagnosis was because of his recent conversation with a colleague. The rheumatologist admitted that the only time he’d ever heard of AOSD was once in medical school. Until that day, I had never even heard of it, but it all made perfect sense.

Q. How are you today and what is your prognosis?

A. Well, some folks have symptoms of Still’s all the time, and they need long-term medications or biologics to manage it; I don’t have that. I do have regular fevers, but everything else is based on flare-ups, so I go through cycles that I’m learning to accept. Knowing I have AOSD gives me a framework for decision-making. There are the classic decisions like eating healthier and exercising enough, but I also know now to take rest more seriously, for example. It’s much easier to manage when you know what you’re dealing with and can start to understand your limitations.

Q. Considering your disease journey, how does it feel working for a medical game studio, and how do you think our work can help patients such as yourself? 

A. The first thing that comes to mind is just that unique opportunity we have to leverage game mechanics that create memorability and recall for good. We know that rare diseases are sort of learned about and forgotten after medical school, like in the case of my AOSD and the rheumatologist who just happened to hear about it from a colleague. The fact that we can take a collection of rare diseases or just one rare disease type and build a game experience to help doctors with recall across the board to potentially shorten the time to diagnosis, that’s so compelling. Being able to step in and create awareness where we need to, make it easier for physicians to identify and recall lesser known diseases and insights is life-changing for so many patients.  

Level Ex has developed various games and experiences that help medical professionals recognize and more quickly identify uncommon conditions. To learn more about how we target rare diseases or to view our rare disease case study, contact us.

Inducing a Stroke in ChatGPT… Could AI help Unlock the Mysteries of the Human Brain?

Pictured above: Midjourney Prompt: android with an exposed fractal brain having a stroke. glowing. scientific. side view --v 4

One of the fascinating things about language models like ChatGPT is how they encode information.

You can delve into this in a prior article where we explore the internals of ChatGPT.

Two important takeaways from that post:

  1. ChatGPT’s implementation is designed to reflect the human brain. With a simulated artificial neural network of 175B connections as a major component of its architecture, ChatGPT is designed to behave similarly to the ‘wet’ neural network inside your skull (yes, there are still many differences).
  2. ChatGPT doesn’t store words like a ‘normal’ computer program does. There are no letters. Its ‘vocabulary’ (for lack of a better term) is stored simply as relationships – words are related to other words with different affinities, essentially like the connections between neurons. Words are concepts, connected to other concepts. That’s it.

#1 and #2 result in a highly-optimized representation of human language. It wouldn’t surprise me if our evolution as a species has resulted in a similar architecture.

My grandmother, of blessed memory, had a stroke in her early twenties (I understand it was due to a badly typed blood transfusion). It left her paralyzed on her right side, and took her years to relearn how to speak and write – skills that she never recovered completely.

A Thought Exercise

Here’s a thought exercise… one that I’d like to get around to in the coming weeks or months (unless a researcher wants to take the baton from me – I’ll gladly hand it off.)

What would happen if we took ChatGPT’s fully trained neural network of 175 billion connection weights, and just zeroed out a whole bunch of those values? 1% of them? 10%? 50%?

I’m curious if ChatGPT would behave like a human who just had a stroke – struggling to find certain words, producing gibberish in certain scenarios that it thinks is correct, etc.
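As a rough sketch of this naive version of the experiment, on a toy weight matrix (the sizes and names here are stand-ins, not ChatGPT’s actual network):

```python
import numpy as np

def ablate(weights, fraction, seed=0):
    """Zero out a random fraction of connection weights,
    crudely simulating diffuse damage."""
    rng = np.random.default_rng(seed)
    damaged = weights.copy()
    damaged[rng.random(weights.shape) < fraction] = 0.0
    return damaged

# A 1,000 x 1,000 stand-in layer; the real 175B parameters won't fit in a demo.
layer = np.random.default_rng(1).normal(size=(1000, 1000))
damaged = ablate(layer, 0.10)  # zero out ~10% of connections
```

You would then feed prompts through the damaged model and compare its output to the intact one.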

What other symptoms might it exhibit?

A More Nuanced Approach

Now just zeroing out a bunch of connections is a naïve approach. With ChatGPT, every neuron in an 85,000-wide layer of the ANN is connected to every neuron in the next layer. The human brain doesn’t work that way: a typical neuron connects to only a few thousand others – a tiny fraction of the total. The human brain also has a lot more ‘physical locality’ to consider – neurons usually don’t connect directly to neurons that are physically far away. ChatGPT achieves something similar to this with its layers, but I suspect that locality isn’t nearly enough.

You’d want to run a simple traversal algorithm that takes a starting point in the network, and follows neural connections around (using parameter weights as a proxy for connectivity and proximity) to zero-out connections and simulate the ‘stroke’. Once in a while you’d want to take a random ‘jump’ to a ‘nearby’ neuron that isn’t directly connected.
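That guided lesion might look something like the following sketch, again on a toy matrix, with weight magnitude standing in for connectivity and proximity as described above:

```python
import numpy as np

def stroke_walk(weights, start=0, steps=200, jump_prob=0.05, seed=0):
    """Random-walk the network from a starting neuron, severing each
    connection traversed; occasionally jump to a random neuron."""
    rng = np.random.default_rng(seed)
    w = weights.copy()
    node = start
    for _ in range(steps):
        strength = np.abs(w[node])
        if rng.random() < jump_prob or strength.sum() == 0:
            node = int(rng.integers(w.shape[1]))  # occasional random 'jump'
            continue
        # Follow a connection with probability proportional to |weight|.
        nxt = int(rng.choice(w.shape[1], p=strength / strength.sum()))
        w[node, nxt] = 0.0  # sever the traversed connection
        node = nxt
    return w

net = np.random.default_rng(2).normal(size=(50, 50))
lesioned = stroke_walk(net)
```

The result is a localized cluster of severed connections rather than uniform random damage – closer in spirit to a stroke than the naive version.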

What do you think would happen?

Might the results of this experiment yield clues to the nature of strokes in humans? The nature of how the human brain stores language?

Could similar experiments on more complex ANNs in the future reveal clues to the nature of other brain conditions – tumors, aneurysms, depression, OCD, etc.?

Midjourney Prompt: An explosion inside an android's head. fractal brain. --v 4

This article was written by Level Ex CEO Sam Glassenberg and originally featured on LinkedIn.

Read the original article and follow Sam on LinkedIn to learn more about advancing medicine through video game technology and design.

Part 2: Video Games And MLR: Compliance By Design

MLR approval is frequently a hurdle for life sciences companies when creating content and developing assets for their brands. In Part 2 of our MLR series, we explore four ways Level Ex utilizes our pharma industry experience, regulatory expertise, and cutting-edge game design to build games optimized for smooth MLR approvals.

This is the second blog post in Level Ex’s series on video games and MLR. Check out Part 1, a Q&A with Level Ex Senior MLR Product Manager Rachel Mansfield.

1. Building Flexible Tools: Ensuring Systems for Content Creation Support MLR Needs

In 2017, Level Ex worked with a particularly conservative pharma company to create an arcade-style mechanism of action (MOA) game to help explain one of their brand’s complex MOAs to anesthesiologists.  

During the process, a reviewer raised a theoretical concern: if a player could hit all of the targets, it could be misconstrued as a claim that the treatment is 100% efficacious for this particular indication. Not a problem. Level Ex engineers quickly adjusted the game’s code so that a specific percentage of molecules would not react. 

Since then, Level Ex has developed a dedicated MOA solution for our pharma clients. The experience instills a deep and enduring understanding of a treatment’s MOA and mechanism of disease (MOD) through interactive, mesmerizing puzzles.

Built-in MLR compliance is a core design pillar of the solution. Parameters such as the percentage of molecules that react in the game can be tweaked by game designers at any time, simply by turning a dial, without the need to write code. The flexibility to control molecular interactions is now in the hands of the designer.
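As a sketch of the design pattern (the names and numbers are hypothetical, not from an actual Level Ex title): MLR-sensitive values live in data rather than code, so a designer can retune them without an engineering change.

```python
import random

# Hypothetical designer-facing tuning data. Keeping the rate below 1.0
# ensures gameplay can never imply 100% efficacy.
MOA_TUNING = {"reaction_rate": 0.85}

def molecule_reacts(rng):
    """Decide whether one molecule reacts, per the tunable rate."""
    return rng.random() < MOA_TUNING["reaction_rate"]

rng = random.Random(0)
hits = sum(molecule_reacts(rng) for _ in range(10_000))  # ~85% react
```

If a reviewer asks for a different rate, only the data value changes; the game logic is untouched.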

MLR compliance is a core design pillar of our MOA solution, which easily accommodates MLR-related requests and changes.

2. Employing a Deliberate Approach to “Scoring” 

Scoring is a useful tool that drives player behavior in games. When considering MLR, however, it’s important to avoid being too lax with awarding points for specific behaviors, unless those behaviors are backed by science. We want to avoid implying that certain actions will always result in certain patient outcomes, or presenting scores that ignore differential diagnoses and alternative treatment choices. Our team takes extra care with scoring rubrics for this reason. How?

Our designers have many options for flexibility when it comes to in-game points and scoring, which enable us to determine the best fit for each client’s experience. The key is to be smart about it, for example:

  • Use established guidelines, when possible—If there are published third-party guidelines, such as those published by medical societies, it’s easy to score the player based on them. Adhere to the guidelines: Earn points. Defy the guidelines: Lose points or earn none.
  • Play with clinical study data—Many of Level Ex’s game mechanics support scoring based on clinical study data, providing the double benefit of MLR compliance while instilling knowledge of relevant safety and efficacy data.
  • Time as a scoring mechanic—It’s typically acceptable to reward a player for making a decision faster. This, for example, reflects the reality of clinical practice, where a trade-off decision needs to be made quickly.
  • Avoid scoring altogether—Not every game needs a scoring mechanic. There are plenty of highly successful games that involve no scoring at all.
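As a toy illustration of the guideline-based approach (the action names and point values here are hypothetical, not from any Level Ex title):

```python
# Hypothetical guideline-adherent actions; in practice these would come
# from published medical society guidelines.
GUIDELINE_ACTIONS = {"start_ace_inhibitor", "order_echocardiogram"}

def score(actions):
    """+10 for each guideline-adherent action, -5 for each deviation."""
    return sum(10 if a in GUIDELINE_ACTIONS else -5 for a in actions)
```

Because the rubric points at the guideline itself, the scoring logic carries no efficacy claim of its own.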
In this module of Cardio Ex, scoring is based on adherence to guideline-directed medical therapy (GDMT) for heart failure.

3. When the Data Isn’t There, Use Abstraction 

In 2018, Level Ex was working with an ophthalmology brand on an MOA game where key aspects of the MOA remained unknown—a situation not uncommon for cutting-edge treatments. The MLR team was especially concerned about presenting details of the MOA that were based in theory but not yet confirmed by research. They knew the molecules combined somehow—they just weren’t sure exactly how.

Conveniently, video games have a method that comes in handy for these scenarios: Abstraction.

The design team proceeded to create an enthralling 3D puzzle game that presented the various molecules as abstract puzzle pieces. The pieces themselves were not shaped in any way representative of a molecule—but they did fit together in the right sequence. This elegantly facilitated the MOA narrative and enabled a smooth MLR process.

In this 3D puzzle game, molecules are presented as abstract puzzle pieces that fit together in the correct sequence. The solution facilitates the MOA narrative, while meeting MLR requirements.

4. Keeping Game Design Agile at Every Step

Flexibility is another key design principle when optimizing games for MLR success. How do you build a game where at any point up until release, a third-party approver—legal, medical or otherwise—may come in and request changes?

Our video game and in-house MLR experts have plenty of combined experience in addressing late questions and making adjustments throughout the development and MLR process, without jeopardizing agreed timelines or budgets. 

In fact, ask any seasoned video game developer about last-minute requests and they won’t flinch. Anyone who has worked on a game based on an existing franchise (Call of Duty, Mario Bros) or third-party IP (movies, TV, sports) has faced a similar phenomenon: Creative approvals.

“We’ve been dealing with this challenge for 30 years—long before medical games were even a thing,” says Jason VandenBerghe, Level Ex Creative Director. “Once, when my team was designing a major AAA James Bond game, we found out late in the process that in the Bond universe, it’s a hard rule that Bond never shoots anyone in the back. How do you build a 3D spy shooter where you can’t sneak around and shoot someone in the back? In the end, we figured out how to make it work.”

Seasoned video game developers are skilled at making last-minute changes and working within flexible processes that account for third-party approvals.

Similarly, Level Ex CEO Sam Glassenberg recalled that at his previous company, a famous actor retracted approval of his cartoon likeness two weeks before his team’s Hunger Games game was due to ship because the actor wanted to look 20 years younger.  

“What can you do? We used a reference from a movie he shot twenty years ago, and we quickly got it approved and shipped the game on time,” Glassenberg said. 

Taking this page from the video game industry playbook, our teams are unfazed when it comes to requests and changes during the MLR approval process. 

Of course, the preparation steps taken by our MLR experts on the frontend help set our clients up for success and ensure that most changes are properly addressed at the beginning of the process. 

The key is building flexible production pipelines and processes that account for third-party approvals. If applied correctly, agile development methodologies mitigate the schedule impact and project risk if approval is late or changes are needed.

These are just some of our key design approaches and methods that ensure our solutions are built for smooth MLR approval. Our toolbox of strategies, combined with our experienced team of game design, MLR, and pharma industry experts, prepares us for any question, challenge, or twist that comes our way during the process.

Ready to learn more? Have an MLR question for our team? Get in touch. We’d love to hear from you.

Level Ex games are played by over one million medical professionals across 13 medical specialties. Level Ex works with the top med tech and pharma companies.

5 Steps to Turn Your Medical Device into a Video Game

You’ve read the research and case studies. You’ve tried the demos. You’ve discovered a smarter way to engage, train, and drive adoption for your medical technology at scale. You’re ready to use the Neuroscience of Play to bring your technology to life. What happens next?

The process to create a game with Level Ex is iterative, ensuring that we hit the most important points you want to get across to your reps and customers, with multiple points for your feedback along the way. We’ll ask you every step of the way: What should the doctor walk away from this experience knowing, thinking, and remembering?

Here are the five steps to turn a medical device into a game.

1. Assemble Your Team

Timeframe: 2 days

Partnering with Level Ex means a hand-picked group of our experts become extended members of your team—game designers, 3D artists, and user experience experts, many of whom have worked on the most engaging consumer games like Words with Friends and Call of Duty. You’ll also partner closely with a biomedical engineer and a medical professional with expertise in your specialty.

In many cases, you will know these Level Ex experts because they have been part of the process leading up to the project kick-off—and they likewise will already be familiar with your product and business. All that’s required on your end is to identify the subject matter experts on your team who will bring their deep knowledge base and experience to the table.

Across case studies, Level Ex has increased revenue, boosted conversion, improved device competency, and delivered an 11x return on investment (ROI).

2. Identify Your Needs

Timeframe: 2 – 4 weeks

Medical Device Example

You wouldn’t build a medical device without engineering blueprints and designs, and the same goes for our game technology. At this step, we build a high-level outline of the game before we start the complex work of making 3D assets and writing code. The outcome is what we call a concept document: a high-level overview of the interactions the player (your customer or rep) will complete during gameplay.

Even though we will be familiar with your product by this point, we will make sure to sit down with you to gain as much understanding of your specific pain points as possible.

You might answer questions such as:

  • What are your best practices for getting content through your medical, legal, and regulatory (MLR) review process?
    • How long does it typically take, and where in the schedule should we plan for these reviews?
  • Is our initial understanding of your needs accurate and valid?
  • Are our initial assumptions of what 3D assets will be required aligned with yours?
    • What else can we identify together that would be needed?
  • How often are your subject matter experts available for consulting or review?

We will also be voracious consumers of your training materials and supporting documentation, and review any existing 3D files of devices you would like depicted in the game.

Initial CAD files are often large and not set up for a smooth game experience. Our team reworks them into a lighter graphical representation and applies “texturing” before importing them into the game, as shown in the example above. Industry key opinion leaders you recommend might also provide insights. We will continue to build this outline together throughout the process.
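
The first pass of that CAD cleanup can be sketched in plain Python. One common, simple simplification technique is vertex-clustering decimation: snap vertices to a coarse grid, merge duplicates, and drop triangles that collapse. This is a minimal, illustrative sketch only, not Level Ex’s actual asset pipeline (which also handles texturing, UVs, and materials); the function name and mesh representation are hypothetical.

```python
def vertex_cluster_decimate(vertices, triangles, cell_size=1.0):
    """Simplify a triangle mesh via vertex-clustering decimation.

    vertices:  list of (x, y, z) tuples
    triangles: list of (i, j, k) vertex-index tuples
    Returns a (new_vertices, new_triangles) pair with fewer elements.
    """
    cell_of = []        # for each original vertex, its merged index
    rep_index = {}      # grid cell -> representative vertex index
    new_vertices = []
    for x, y, z in vertices:
        # Snap each vertex to a uniform grid cell.
        cell = (round(x / cell_size), round(y / cell_size), round(z / cell_size))
        if cell not in rep_index:
            rep_index[cell] = len(new_vertices)
            new_vertices.append((x, y, z))
        cell_of.append(rep_index[cell])

    # Re-index triangles and drop any that collapsed to an edge or point.
    new_triangles = []
    for i, j, k in triangles:
        a, b, c = cell_of[i], cell_of[j], cell_of[k]
        if a != b and b != c and a != c:
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

A production pipeline would use a dedicated tool for this (and preserve normals and UVs), but the idea is the same: a dense engineering model becomes a lighter mesh the game engine can render smoothly.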

→ Legal / Regulatory Review (Optional): Some teams choose to bring the concept document through the review system as an advance preview of what’s to come. Others find this too high-level for an official review. We will be your partner in helping you determine the right process, supporting you through it, and sharing best practices.

Curious how a video game makes it successfully through the MLR approval process? Our Senior MLR Product Manager Rachel Mansfield explains in this blog post.

3. Build the Game

Timeframe: 2 – 4 months

Building the game, starting with CAD drawings

With the outline set, your product takes the stage. You will see CAD drawings of your product transformed into 3D replicas. While the models will be three-dimensional, at this point they will be untextured and uncolored, serving as placeholders that help us understand the scale, setting, and initial lighting of the environment. Your Level Ex team of 2D and 3D artists determines the best camera angles and general positioning of the assets to illustrate the learning objectives to the end user.

At this step, we are defining the look and feel of the game. We will also lay out:

  • The storyboard depicting the steps of the experience and overarching narrative 
  • Sketches of procedures and additional technology, including any technological opportunities or constraints
  • Visuals, color palettes, and tone of voice in alignment with your brand and our outline 
  • Game mechanics, physics, and how objects will render on the player’s screen

→ Legal / Regulatory Review 1: This review tends to focus on analyzing what the players will actually experience—the ways they will be able to move a device, the reactions a patient might have, etc. Because many review systems are not equipped to review and approve a functioning interactive video game, we will work with your team to determine suitable alternatives. This can take the form of recorded playthroughs or capturing each moment of the experience in a storyboard document, or a similar format that works best for you. We often attend these review meetings at your request to help focus feedback and explain what the future version will look like.

Creating an interactive experience

With all of the art files ready and the scene set, your game designers and animators will now create the interactive experience. During this stage, we make final decisions on each interaction and response between player and procedure. We also refine the look and feel. We want to make sure that when someone plays the game, it is as true to what they experience in practice as possible.

Our developers then write the source code to bring everything to life: designing and rendering product and human models into the environment, integrating audio design and voice (if required), and crafting levels.

We also meticulously test for bugs and tweak code to correct any issues.

As always, much of the project’s time, energy, and resources are focused on reviews with you to ensure that all the game elements and mechanics meet your needs.

4. Finalize the Game

Timeframe: 2 – 4 weeks

Applying final textures, tweaking lights and testing all elements.

You are very close to bringing your game to your customers. In this phase, we’re applying final textures, tweaking lighting, and testing every possible element—but now we’re looping back to your end users. We will work with you on playtesting by having your end users play through the experience. Our user experience experts ask these early users detailed questions to gauge their perceptions, possible points of confusion, and where improvements can be made. They ask probing questions such as: “How would you rate the ease of the experience?” or “What do you think this button is asking you to do?” We document all of their feedback, then share and discuss it with you. Lastly, we incorporate their feedback or flag it as a potential future project we could work on together.

→ Legal / Regulatory Review 2: Our team will help support the final approval and provide the materials in the format that works best for your teams—video, documentation, and beyond.

5. Launch the Game

Timeframe: The press of a button

Your video game is deployed across devices, including mobile, tablet, and desktop computers.

Your game is ready to be played by your users—your reps, prospective and current customers, doctors, and medical professionals that you identify. Likely, these stakeholders have experienced a traditional simulation (sim) in the past, but they have never seen anything like the customized medical video game now in their hands.

Depending on your use case, you may roll out the game to your sales force to use in their demos, embed on your website to drive leads with potential buyers, include it in on-demand training materials, or a combination. 

Regardless of who is using it or where, we’ve taken those last few weeks of development to infuse the game with that special Level Ex-tra. We’ve tweaked the art, audio, and animations to give the player a sense of purpose and gratification. Every time the player does something, we ensure it feels meaningful and the entire experience is magical.

Our team continues to support your deployment by assisting with each of these actions and sharing best practices for the tool. You will start receiving analytics reports based on the specific data tracked in your game. These analytics capture:

  • How many players have interacted with the experience and for how long
  • What actions players have taken in the game and whether those actions were successful
  • How many times players replayed the experience 

These analytics show you exactly what content each player was exposed to every time they played and allow you to spot trends and patterns.
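
As a rough illustration of how raw session events roll up into those report metrics, here is a minimal sketch in Python. The record shape and field names are hypothetical, not Level Ex’s actual analytics schema.

```python
from collections import defaultdict

def summarize_sessions(sessions):
    """Aggregate raw play-session records into report metrics.

    Each session is a dict like (illustrative schema only):
      {"player": "rep-42", "minutes": 8.5,
       "actions": [{"name": "deploy_device", "success": True}, ...]}
    """
    players = set()
    total_minutes = 0.0
    attempts = successes = 0
    plays_per_player = defaultdict(int)

    for s in sessions:
        players.add(s["player"])                  # who interacted
        plays_per_player[s["player"]] += 1        # how often
        total_minutes += s["minutes"]             # for how long
        for action in s["actions"]:
            attempts += 1
            successes += action["success"]        # True counts as 1

    # A replay is any session beyond a player's first.
    replays = sum(n - 1 for n in plays_per_player.values())
    return {
        "unique_players": len(players),
        "total_minutes": total_minutes,
        "action_success_rate": successes / attempts if attempts else 0.0,
        "replays": replays,
    }
```

In practice the per-event data a game client emits is richer than this (timestamps, level IDs, content variants), but the aggregation into players, time, action success, and replays follows the same pattern.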

And there you have it: A behind-the-scenes glimpse into the evolution of a medical device becoming a video game. Ready to bring your device to life in a game?

We can’t wait to help you get started.

From minimally invasive surgery to complex procedures and robotic surgery, our medical device solutions help train clinicians across the practice spectrum and drive adoption in unparalleled ways. Want to see more of our med device experiences or learn how we can work together? Contact us.