
Storytelling with virtual reality

Disclaimer: Since this project was started and executed between 2015-2016, some of the technology is outdated. But the experience and best practices learned are worth sharing.

2014-2016 made for an explosive few years for Virtual Reality, and especially for VR video. A few of us from Applied Innovation (now Thomson Reuters Labs) kept tabs on all the VR activity, and around that time VR was THE buzzy thing going around in the Bay Area. My team in the San Francisco office couldn’t help but be exposed to it all. What started out as curiosity and experimentation ended with us creating a VR video about a cultural event and its social impact, presenting it to the local community in San Francisco, and developing a comprehensive set of guidelines on VR video and storytelling.

Cool stuff but kinda clunky

My husband had gotten his hands on the Dev Kit 2, and we played around with it pretty extensively. But as with most VR experiences then, we rarely went back and replayed anything. The setup was pretty inelegant, with the headset and cords needed to run the thing. The experiences themselves felt more like really nice demos. I wasn’t much of a gamer, and the majority of the apps were games. After a couple of months it collected dust in our closet.

Expectations.

Reality.

Cool content and curiosity piqued

Then in 2014, Google released the Cardboard. It immediately caught my attention with its very low price tag (or free, if you made it yourself) and the accessibility and approachability that came with it.

And again, while a lot of the apps were games, photography and video in VR were gaining traction, and some newsrooms were looking to it as a viable form of media and storytelling. The NYTimes (in collaboration with JR and then Vrse, now With.in) published Walking New York, and I was blown away. The story was beautiful, and the 360° video made it truly immersive. Soon after, The Guardian published Welcome to Your Cell and used the same medium to create a totally unexpected experience.

I started off light and played around with 360° panoramas using Google Sphere. Wherever I went, I snapped a pano; it was really easy to do. Most turned out terrible. A handful were okay. Below is probably my only decent pano.

I was taking a Neon & Light workshop at the Crucible in Oakland, and we had to present our neon creation as an art piece. Since I was already playing with 360° photography, naturally I combined the two. I plopped the neon light I’d haphazardly created onto the floor of my bedroom, turned the lights off, and shot the pano with Sphere.

The green light

The buzz could no longer be ignored, and the Applied Innovation team was finally given the green light to experiment with VR video, share our findings, and produce an end product. The teams in London, Eagan, and my team in San Francisco focused on different things, though we collaborated and shared knowledge quite a bit.

The San Francisco team focused on creating VR video content from start to finish. This included shooting, editing, stitching, uploading the final video to YouTube, and making it work on Cardboard. We pored over whatever material we could get our hands on and did a proper deep dive on VR:

  • Understanding the technology and how it works

  • The difference between 360° video and VR

  • VR headsets

  • VR video cameras

  • Shooting equipment

  • Stitching software

  • Best practices

Our first (failed) attempt

We gathered all the GoPros we could find, 3D printed a VR video rig from Thingiverse using my husband’s 3D printer, and set out to film some test shots.

3D printed video rigs

The rig used six GoPros, arranged around the faces of a cube so that together they covered a full sphere. When the footage from all cameras is properly stitched, it takes the shape of a sphere, creating a proper 360° video. One camera is designated as the parent and the others as children; with a remote, you trigger the parent camera to record, and the others fire off in sync.

Since we sourced the cameras from family and friends, we were working with different models and resolutions. After much trial and error, we did get all six to record in sync, but we never managed to stitch together a clean shot. A few takeaways from our experience:

  • Use the same model of GoPro for all cameras. This ensures easier syncing and consistent resolutions.

  • Make sure the cameras are all using the same resolution. If just one is off, the footage is unusable: the software will either refuse to stitch the footage, or the final video will be warped. (A quick automated check, like the sketch after this list, would have caught this early.)

  • Label the cameras. It makes them easier to keep track of.

  • Record in a large open space.

  • The less movement, the better.

  • Positioning a VR rig is different from positioning a regular video camera. Be aware of all surroundings, including above and below. Position the rig as centrally as possible.

  • Stitching VR video is painful, which I’ll get into later.
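On that resolution point, an automated pre-flight check would have saved us hours. Here’s a minimal sketch of the kind of check I wish we’d had, assuming ffprobe (part of FFmpeg) is installed; the file names are invented for illustration.

```python
# Sanity-check that every camera's clip matches before stitching.
# Assumes ffprobe (FFmpeg) is on the PATH; file names are hypothetical.
import subprocess

def video_profile(path: str) -> str:
    """Return 'WIDTHxHEIGHT@FPS' for the first video stream of a file."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height,r_frame_rate",
         "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    width, height, fps = out.split(",")
    return f"{width}x{height}@{fps}"

clips = [f"scene01_cam{i}.mp4" for i in range(1, 7)]  # six-camera rig
profiles = {clip: video_profile(clip) for clip in clips}

if len(set(profiles.values())) > 1:
    for clip, profile in profiles.items():
        print(f"{clip}: {profile}")
    raise SystemExit("Mismatched clips found; fix before stitching.")
print("All clips match; safe to stitch.")
```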

2015 Notting Hill Carnival

Every August, a million people flock to a wealthy West London neighborhood for Europe’s biggest street party: the Notting Hill Carnival. Started by immigrants confronting racism, Carnival has brought the spirit of the Caribbean to England, rain or shine, for over fifty years. And in August 2015, we collaborated with the Thomson Reuters Foundation to create a short film in VR to highlight just that.

We filmed the 2015 Notting Hill Carnival with a crew of three. The filming and post-production proved challenging, and we took away huge lessons from the experience.

First, we needed the right equipment

We had to choose which VR headset to target, acquire the appropriate film equipment for a VR format, and pick the software we’d use to edit the film.

Pick a headset. We weighed the options available at the time, and in the end chose Google Cardboard because of its low cost and because it was the least complicated to use. We also wanted to leverage YouTube’s 360° video player.

Once we chose the headset, we needed to decide on a video camera setup. We were deciding between the Ricoh Theta and the Freedom360 GoPro rig.

We almost went with the Ricoh because of its cost, its availability, and the fact that no video stitching is required. At the time, a lot of the GoPro rigs were unavailable or slow to ship, and extremely expensive, but their output was much better video quality. Luckily our film crew in London was able to rent one, along with monopods (vs. tripods, whose legs can get in the shot) and mics to capture the atmospheric audio.

Choosing the right software

Since we decided to use a camera rig, we needed software to stitch the footage, and then software to edit the final video:

  • We chose VideoStitch (now Orah, though I’m not sure if they’re still active) because it was affordable and technically faster, since it uses the machine’s graphics card rather than the CPU to process the stitching.

  • I used After Effects to edit the final video. But that’s a personal preference. Final Cut Pro or Adobe Premiere would have been just fine.

The stitching. Oh, the stitching.

VideoStitch in action

VideoStitch was pretty painful to use. In hindsight, we probably should’ve chosen something like Kolor, because we had a lot of graphics card compatibility issues within our team, leaving my machine as the only workable one.

If you’ve got 10 scenes to stitch and you used a six-camera rig, you’ve got 60 video files to stitch. And this is after you’ve reviewed all the footage and narrowed it down to those 10. For us, I think it was something like 30 scenes, narrowed down to half, and then to three.
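To make the bookkeeping concrete, here’s a toy illustration of how the file count multiplies; the naming scheme is invented, not what we actually used.

```python
# Toy illustration: every kept scene multiplies into one clip per camera.
scenes = [f"scene{n:02d}" for n in range(1, 11)]  # 10 scenes kept
cameras = [f"cam{c}" for c in range(1, 7)]        # six-camera rig

clips = [f"{scene}_{camera}.mp4" for scene in scenes for camera in cameras]
print(len(clips))  # 60 files to feed through the stitcher
```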

As far as video stitching technology goes, what was available then was not that great:

  • They were slow.

  • The synchronization wasn’t the best.

  • The UI wasn’t intuitive.

  • They were buggy and crashed all the time.

But they did the job. It just sucked when I’d been working on a scene, wasted a good hour or two fixing the syncing because the software messed it up, and then it crashed. There was no auto-save, so I lost everything. Fun!

But the pain didn’t stop there.

There was still one more step before uploading to YouTube. Exporting the final video from After Effects stripped out the VR metadata that tells players to render the file as a 360° sphere. Google created an app specifically to inject this metadata back.

This is what it looked like at the time.
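That app is Google’s open-source Spatial Media Metadata Injector (https://github.com/google/spatial-media), which also has a command-line mode. As a rough sketch of the re-injection step, with hypothetical file names, you could drive it like this:

```python
# Re-inject the spherical (360°) metadata that the After Effects export
# strips out, using Google's Spatial Media Metadata Injector
# (https://github.com/google/spatial-media). File names are hypothetical.
import subprocess

subprocess.run(
    ["python", "spatialmedia",
     "-i",                       # inject metadata instead of just reading it
     "carnival_final.mp4",       # flat export from After Effects
     "carnival_final_360.mp4"],  # output that YouTube will play as 360° video
    check=True,
)
```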

After all the blood, sweat, and tears

The video was finished. From the experience, we picked up a few best practices:

  1. Keep the camera movement slow and steady. The viewer controls the movement. This will also minimize VR sickness.

  2. The camera should be placed thoughtfully and carefully. Think about the subject and the scene you are trying to create.

  3. With a screen only inches away, the viewer can see every pixel. Resolution is held to a different standard in VR.

  4. Like camera movement, keep the editing steady and smooth.

  5. Type is magnified in VR. Keep it small, centered, and short so the viewer can see and read it.

Check out the final output for yourself.

Speaking at Runway SF

As mentioned earlier, there was a lot of VR activity going on in the Bay Area, but nobody was talking about the process of creating stories with VR. We were in a unique position and wanted to share our experience and findings. We organized an event at Runway in San Francisco called “Creating Stories in Virtual Reality.” You can download the deck with all my notes here.

That’s me speaking at Runway, and waving my hand awkwardly.

We got a lot of great feedback and engagement from the audience. Eventually we presented it at our local office in San Francisco, and soon after we held a webinar with the Thomson Reuters UX Council, presenting it to the entire UX community at Thomson Reuters.

Guidelines for Reuters News

With the positive feedback from the presentations, we worked with the Reuters News SF Bureau and created the Virtual Reality Design Guidelines as an internal resource for Reuters News (and anyone at Thomson Reuters) to reference if and when creating VR video content. You can view the Google Doc version here.


Credits

Team leads: Marine Leroux-Thibault and Kate Boeckman
Creative direction, video editing, writer: Jennifer Lee
VR video footage: Thomson Reuters Foundation
