Sem 2 Round-up: Education

  Education. Welcome to the second of four parts rounding up the entirety of my semester 2. Due to the nature of my work, I often can't discuss or disclose a project until a certain amount of time has passed, or until the project is finalised; this is why there has been a lack of posts over the past few months. Education is one of the main focuses of my MA. As I venture into the depths of Unreal Engine's limitless potential, I've found so many ways to teach and educate, not only younger demographics but older generations too.

The first one is a light project. This short demo was created using Facebook's amazing Spark AR Studio, which allows creators and designers to develop small, simple AR experiences directly through Facebook and Instagram's camera. It's a brilliant, node-based piece of software that can create some really powerful filters. My first few experiences with Spark were very clunky, but after a few tries with its image-based AR tracking I was able to create this brilliant little interactive AR map for the university: students could point the filter at their student card and it would show them key locations around the campus.

My next project was a really fun proof-of-concept demo. Essentially, we were tasked with creating an augmented reality application that can be used to explain and label parts of the human anatomy. For my initial design I wanted to create an interactable AR skeleton that the player can move around, as well as letting them view the different layers of the body by peeling back layers such as skin, muscle, organs, bones, etc.

I was able to source some medically accurate organ and skeletal models and created a very basic demo where the player can interact with the skeleton, controlling its movement but also stripping back the layers of bone in order to view the vital organs.

Finally, the lungs project. This is an ongoing project that has been in development for nearly a whole year now. The main goal behind this project initially was to develop a VR experience where a user steps into a special booth and is immersed in a realistic environment that makes them feel as if they're encapsulated inside a human body. However, with the hit of COVID, having multiple players using and swapping headsets isn't a viable option. Since then we've ported the idea to a more mobile AR platform, where users augment a bodily organ in front of them in real time and can see the realistic effects of smoking on the human body. As this project is still heavily in the development phase I'm only able to release a few pieces of information, but here are a few screenshots of the development.

Off the back of this, we demoed some of this work to a higher-up at the university and they were really impressed. They made a passing comment about how amazing it would be if we could digitally view the data that currently exists for patients with diseases such as COVID. From this I did some research into the viability of accessing real patients' CT and MRI scans, and we found a GitHub repository with over 100GB of scans and data that we could use for medical research. I had an idea.

In my previous semester I had been playing around with alternative rendering techniques, such as ray marching for volumetric visualisation. I realised that I could create 3D volumetric clouds using this data, and because none of it renders polys or traditional static meshes, it would be able to run on lightweight devices such as the HoloLens. Oh yeah, I got a HoloLens at this point; I'll talk about that in the next post.

So I began to create an automated pipeline on my spare PC. Skip the next two paragraphs if you like, but stick around if you want to listen to the manic ramblings of a codeine and caffeine fuelled idiot. OK, so get this, yeah? We take the data, which is in a proprietary format, and use a Node.js script to convert it into individual sliced images at maximum resolution; these scans are essentially lots of little images taken at set intervals through an organ. Imagine a loaf of bread that's been sliced up: you remove a slice, take a picture, and so on until you've got the whole loaf, yeah? It's the same thing, but with your brain. Once we've got the 300 or so images, we run each of them through a Photoshop batch automation which normalises the black and white levels in order to create a sharp contrast between tissue and bone density. These images have to build a cloud where white is 100% dense and black is nothing there, so if the images were all at different black levels it would cause a weird shadowing effect. After this, a little Python script works out how many images there are, builds an old-school sprite sheet out of them, and pads it out to fit 8192x8192, as this is the largest texture we can bring into UE4.
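The normalise-and-tile step above can be sketched in a few lines. This is a hypothetical, simplified version of the idea (the real pipeline uses Photoshop batch automation for the levels pass): it assumes the scan slices are already loaded as greyscale arrays, and the function names are my own.

```python
import math
import numpy as np

MAX_TEX = 8192  # largest texture UE4 will import

def normalise(slice_img: np.ndarray) -> np.ndarray:
    """Stretch black/white levels so every slice shares the same 0..1 range."""
    lo, hi = float(slice_img.min()), float(slice_img.max())
    if hi == lo:
        return np.zeros_like(slice_img, dtype=np.float32)
    return (slice_img.astype(np.float32) - lo) / (hi - lo)

def build_atlas(slices: list) -> tuple:
    """Tile normalised slices into a square sprite sheet, padded with black."""
    h, w = slices[0].shape
    grid = math.ceil(math.sqrt(len(slices)))   # tiles per row/column
    if grid * max(h, w) > MAX_TEX:
        raise ValueError("atlas would exceed the 8192x8192 import limit")
    atlas = np.zeros((grid * h, grid * w), dtype=np.float32)
    for i, s in enumerate(slices):
        r, c = divmod(i, grid)
        atlas[r * h:(r + 1) * h, c * w:(c + 1) * w] = normalise(s)
    return atlas, grid

# Example: 300 fake 256x256 scan slices -> an 18x18 grid, 4608x4608 sheet
fake = [np.random.randint(0, 4096, (256, 256), dtype=np.uint16) for _ in range(300)]
atlas, grid = build_atlas(fake)
```

The square-grid layout isn't the only option, but it keeps the per-tile UV maths trivial once the sheet is inside the material editor.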

So we now have some big 1.5GB TIFF files, now what? Basically, using a custom node in UE4's material editor and some HLSL, I was able to create a custom volumetric raymarcher that takes the sprite sheet, cuts it back up into slices, and interprets that data as volumetric opacity. Add some lighting code here and there, plus the ability to change the slice you're on so you can skim through the organ, and voila. Brain.
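The shader itself lives in an HLSL custom node, which I can't really drop in here, but the core loop is simple enough to sketch in Python. This is a hypothetical CPU version of the idea, assuming the sheet has already been unpacked back into a 3D density volume and marching straight down one axis rather than along per-pixel view rays:

```python
import numpy as np

def raymarch_opacity(volume: np.ndarray, density_scale: float = 0.1) -> np.ndarray:
    """Front-to-back alpha compositing straight down the z axis.

    volume: (depth, height, width) densities in [0, 1], where white (1.0)
    is fully dense tissue/bone and black (0.0) is nothing there.
    Returns a (height, width) image of accumulated opacity.
    """
    depth, h, w = volume.shape
    accum = np.zeros((h, w), dtype=np.float32)
    for z in range(depth):
        step_alpha = volume[z] * density_scale   # opacity contributed by this slice
        accum += (1.0 - accum) * step_alpha      # front-to-back compositing
    return np.clip(accum, 0.0, 1.0)

# A solid block of 1.0-density "bone" floating in empty space: rays through
# it saturate towards opaque, rays outside it stay fully transparent.
vol = np.zeros((32, 64, 64), dtype=np.float32)
vol[:, 16:48, 16:48] = 1.0
img = raymarch_opacity(vol)
```

The appeal of this approach is exactly what made it viable on the HoloLens: no polygons, just texture samples and an accumulation loop.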

In the demo above I've created a few MR sliders that let you adjust the actual scan step you're on, but also change the material anisotropy, which helps differentiate bone density from tissue and highlight certain parts of the organ. Below is another demonstration, this time using CT scan data from a subject with COVID-19.

Finally, here's a short demo of a VR game that we built over the summer for the criminology course. They wanted a short game that lets the player relive and play VR recreations of famous legal cases, set in the period in which they took place. We spent a lot of time optimising and building this game to run on the Quest, as it's a great platform for low-budget VR gaming.

