Google AR and VR


Early explorations with ARCore 1.0

We recently launched ARCore 1.0 to give developers the ability to build powerful augmented reality apps that make your phone’s camera smarter. It works on over 100 million Android devices, on more than a dozen different device models, so now more people can use AR to interact with the world in inspiring new ways.
While it’s only been a few weeks since launch, developers are already publishing new ARCore experiences on Google Play, across gaming, shopping and home, and creativity.


For gaming, AR weaves the action right into the world around you, making the experience more immersive and unlocking a whole new way to play. Here are new titles built with ARCore:


My Tamagotchi Forever

BANDAI NAMCO has released “My Tamagotchi Forever,” an experience in which players can raise Tamagotchi characters while building Tamatown, a virtual town you can play with in the real world.


Walking Dead Our World

Immerse yourself in the zombie apocalypse! Your mission, should you choose to accept it, is to defend your surroundings by fighting zombies in real-world environments. Walking Dead Our World is a great example of how to use Google Maps APIs and ARCore together to build a location-based AR game. It’s currently in pre-registration on Google Play, with a broader release planned soon.

TendAR

Tender Claws created TendAR, a game that features Guppy, a virtual fish that responds to users’ facial expressions and survives by “eating” other people’s emotions. The game was created by combining ARCore with Google Cloud APIs, which provide computer vision and object recognition. You can read more about how they created the experience in this case study. TendAR will be available to download starting in July 2018.

Shopping & Home

Augmented reality can bring anything into your space, which helps when you’re trying to understand the size and scale of things before you buy or ship them. Here are a few experiences built by our retail partners to aid you in making smarter decisions:


Pottery Barn 360 Room View

With Pottery Barn’s AR app, you can view furniture in your room to see how it pairs with your existing pieces, change the color and fabric of furniture before deciding which looks best, and purchase what you’ve picked out directly from the app.


eBay is using AR to solve a specific challenge facing their community of sellers: what size shipping container is needed to send that product? With the “Which Box” feature in eBay’s app, sellers can visualize shipping boxes to determine which container size they need to send any product.

Sotheby’s Curate, Streem
If you’re shopping for a new home or need help maintaining yours, AR can also come in handy. With ARCore, Sotheby’s is changing the way people stage furniture in the real estate world, and the Streem app connects customers with professionals to solve household maintenance requests.


Over the last few months, we’ve been tinkering with experiments that show how AR can be used as a new creative medium for self-expression. We’ve worked with creators across different disciplines to explore what happens when AR is used by illustrators, choreographers, animators and more.

Now, we’re inviting more people to experiment with this technology through an app that lets you make simple drawings in AR, and then share your creation with a short video. The caveat: it’s “Just a Line.”

Make simple drawings in AR with Just a Line

We’re open sourcing the core code of the app so developers can use it as a starting point for their own ARCore projects, and we’re excited to see what people create with Just a Line. Download it on Google Play.

Anyone with an ARCore-enabled phone can jump into most of these experiences from the Play Store right now, and developers can get started building their own apps today.


Open sourcing Resonance Audio

Spatial audio adds to your sense of presence when you’re in VR or AR, making it feel and sound like you’re surrounded by a virtual or augmented world. And regardless of the display hardware you’re using, spatial audio makes it possible to hear sounds coming from all around you.

Resonance Audio, our spatial audio SDK launched last year, enables developers to create more realistic VR and AR experiences on mobile and desktop. We’ve seen a number of exciting experiences emerge across a variety of platforms using our SDK. Recent examples include apps like Pixar’s Coco VR for Gear VR, Disney’s Star Wars™: Jedi Challenges AR app for Android and iOS, and Runaway’s Flutter VR for Daydream, which all used Resonance Audio technology.

To accelerate adoption of immersive audio technology and strengthen the developer community around it, we’re opening Resonance Audio to a community-driven development model. By creating an open source spatial audio project optimized for mobile and desktop computing, any platform or software development tool provider can easily integrate with Resonance Audio. More cross-platform and tooling support means more distribution opportunities for content creators, without the worry of investing in costly porting projects.

What’s included in the open source project

As part of our open source project, we’re providing a reference implementation of YouTube’s Ambisonic-based spatial audio decoder, compatible with the same Ambisonics format (Ambix ACN/SN3D) used by others in the industry. Using our reference implementation, developers can easily render Ambisonic content in their VR media and other applications, while benefiting from Ambisonics’ open source, royalty-free model. The project also includes encoding, sound field manipulation and decoding techniques, as well as head-related transfer functions (HRTFs) that we’ve used to achieve rich spatial audio that scales across a wide spectrum of device types and platforms. Lastly, we’re making our entire library of highly optimized DSP classes and functions open to all. This includes resamplers, convolvers, filters, delay lines and other DSP capabilities. Additionally, developers can now use Resonance Audio’s brand new Spectral Reverb, an efficient, high-quality, constant-complexity reverb effect, in their own projects.
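As a rough illustration of the developer-facing side, a minimal sketch using the JavaScript (Web) build of Resonance Audio to spatialize a single sound through Web Audio might look like the following. The package name, asset path and exact calls are assumptions drawn from the SDK’s getting-started guide rather than a definitive reference:

```typescript
// Minimal sketch, not a definitive reference: spatialize one mono source with the
// Resonance Audio Web SDK and the Web Audio API. Package name and asset path are
// assumptions for illustration.
import { ResonanceAudio } from 'resonance-audio';

const audioContext = new AudioContext();

// The scene handles Ambisonic encoding/decoding and HRTF-based binaural rendering.
const scene = new ResonanceAudio(audioContext);
scene.output.connect(audioContext.destination);

// Route an ordinary <audio> element into the scene as a spatialized source.
const audioElement = document.createElement('audio');
audioElement.src = 'resources/footsteps.wav'; // hypothetical asset
const elementSource = audioContext.createMediaElementSource(audioElement);

const source = scene.createSource();
elementSource.connect(source.input);

// Place the sound one meter to the listener's left; the renderer does the rest.
source.setPosition(-1, 0, 0);

audioElement.play();
```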

We’ve open sourced Resonance Audio as a standalone library and associated engine plugins, VST plugin, tutorials, and examples under the Apache 2.0 license. This means Resonance Audio is yours to use in your projects, no matter where you work. And if you see something you’d like to improve, submit a GitHub pull request to be reviewed by the Resonance Audio project committers. While the engine plugins for Unity, Unreal, FMOD, and Wwise will remain open source, going forward they will be maintained by project committers from our partners, Unity, Epic, Firelight Technologies, and Audiokinetic, respectively.

If you’re interested in learning more about Resonance Audio, check out the documentation on our developer site. If you want to get more involved, visit our GitHub to access the source code, build the project, download the latest release, or even start contributing. We’re looking forward to building the future of immersive audio with all of you.

Experimenting with Light Fields

We’ve always believed in the power of virtual reality to take you places. That’s why we created Expeditions, to transport people around the world to hundreds of amazing, hard-to-reach or impossible-to-visit places. It’s why we launched Jump, which lets professional creators film beautiful scenes in stereoscopic 360 VR video, and it’s why we’re introducing VR180, a new format for anyone—even those unfamiliar with VR technology—to capture life’s special moments.

But to create the most realistic sense of presence, what we show in VR needs to be as close as possible to what you’d see if you were really there. When you’re actually in a place, the world reacts to you as you move your head around: light bounces off surfaces in different ways and you see things from different perspectives. To help create this more realistic sense of presence in VR, we’ve been experimenting with light fields.

Light fields are a set of advanced capture, stitching, and rendering algorithms. Much more work needs to be done, but they create still captures that give you an extremely high-quality sense of presence by producing motion parallax and extremely realistic textures and lighting. To demonstrate the potential of this technology, we’re releasing “Welcome to Light Fields,” a free app available on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. Let’s take a look at how it works.

Capturing and processing a light field

With light fields, nearby objects seem near to you—as you move your head, they appear to shift a lot. Far-away objects shift less and light reflects off objects differently, so you get a strong cue that you’re in a 3D space. And when viewed through a VR headset that supports positional tracking, light fields can enable some truly amazing VR experiences based on footage captured in the real world.

This is possible because a light field records all the different rays of light coming into a volume of space. To record them, we modified a GoPro Odyssey Jump camera, bending it into a vertical arc of 16 cameras mounted on a rotating platform.

Left: A time lapse video of recording a spherical light field on the flight deck of Space Shuttle Discovery.
Right: Light field rendering allows us to synthesize new views of the scene anywhere within the spherical volume by sampling and interpolating the rays of light recorded by the cameras on the rig.

It takes about a minute for the camera rig to swing around and record about a thousand outward-facing viewpoints on a 70cm sphere. This gives us a roughly two-foot-wide volume of light rays, which determines the size of the headspace users can lean around in to explore the scenes once they’re processed. To render views for the headset, rays of light are sampled from the camera positions on the surface of the sphere to construct novel views as seen from inside the sphere, matching how the user moves their head. They’re aligned and compressed into a custom dataset file that’s read by special rendering software we’ve implemented as a plug-in for the Unity game engine.
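To make “sampling and interpolating the rays” a bit more concrete, here is a deliberately simplified sketch of the idea, not the renderer described above: for each ray leaving the viewer’s eye, blend the colors recorded by the few capture positions whose rays pass closest to it. Every type and helper below is hypothetical.

```typescript
// Illustrative simplification of light field view synthesis (not Google's renderer):
// blend the colors recorded by the nearest capture positions on the sphere, weighted
// by how close their rays come to the ray we want to synthesize.
interface Vec3 { x: number; y: number; z: number; }

interface CaptureSample {
  position: Vec3;                                           // camera position on the 70cm sphere
  sampleColor(direction: Vec3): [number, number, number];   // lookup into that camera's image
}

// Hypothetical helper: distance from a point to the line defined by the eye ray.
function distancePointToRay(p: Vec3, origin: Vec3, dir: Vec3): number {
  const v = { x: p.x - origin.x, y: p.y - origin.y, z: p.z - origin.z };
  const t = v.x * dir.x + v.y * dir.y + v.z * dir.z; // dir assumed unit length
  const c = { x: origin.x + t * dir.x, y: origin.y + t * dir.y, z: origin.z + t * dir.z };
  return Math.hypot(p.x - c.x, p.y - c.y, p.z - c.z);
}

function synthesizeRay(
  eye: Vec3,
  direction: Vec3,
  captures: CaptureSample[],
  k = 4,
): [number, number, number] {
  // Rank capture positions by how close they sit to the requested ray.
  const ranked = captures
    .map(c => ({ c, d: distancePointToRay(c.position, eye, direction) }))
    .sort((a, b) => a.d - b.d)
    .slice(0, k);

  // Nearer cameras get more weight; blend the color each recorded along this direction.
  const weights = ranked.map(r => 1 / (r.d + 1e-4));
  const total = weights.reduce((s, w) => s + w, 0);
  return ranked.reduce<[number, number, number]>((acc, r, i) => {
    const [cr, cg, cb] = r.c.sampleColor(direction);
    const w = weights[i] / total;
    return [acc[0] + w * cr, acc[1] + w * cg, acc[2] + w * cb];
  }, [0, 0, 0]);
}
```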


Recording the World with Light Fields

We chose a few special places to try out our light field-recording camera rig. We love the varnished teak and mahogany interiors at the Gamble House in Pasadena, the fragments of glossy ceramic and shiny mirrors adorning the Mosaic Tile House in Venice, and the sun-filled stained glass window at St. Stephen’s Church in Granada Hills. Best of all, the Smithsonian Institution’s Air and Space Museum and 3D Digitization Office gave us access to NASA’s Space Shuttle Discovery, providing an astronaut’s view inside the orbiter’s flight deck, which has never been open to the public. We closed by recording a variety of light fields of people, experimenting with how eye contact can be made to work in a 6-degrees-of-freedom experience.

Try “Welcome to Light Fields”

VR video is a promising technology for exploring the world, and while light fields are still an experiment, they show a new level of how convincing virtual reality experiences can be. We hope you enjoy our “Welcome to Light Fields” experience, available now on Steam VR. Take the seven-minute Guided Tour to learn more about the technology and the locations, and then take your time exploring the spaces in the Gallery. This is only the beginning, and lots more needs to be done, but we’re excited about this step toward more realistic capture for VR.

Watch live performances at The FADER FORT from SXSW in VR180

For over 15 years, The FADER has introduced the world to new music artists at The FADER FORT, the global media company’s annual live music event at South by Southwest (SXSW). FADER FORT has been the breakout, must-do gig for famous artists including Cardi B, Dua Lipa, Drake and many others. The event gives emerging and global artists an intimate stage to experiment on and allows those in attendance to experience performances up close and personal. But because the experience is so intimate, only a lucky few make it into the must-see show, making it one of the most in-demand events at SXSW.


To bring The FADER FORT experience to more fans, we partnered with The FADER to livestream performances by Saweetie, Bloc Boy, Valee, Speedy Ortiz, YBN Nahmir and other special guests in VR180 on YouTube. No matter where you are, you can watch live on YouTube via your desktop or mobile device, or using Cardboard, Daydream View or PlayStation VR.

With VR180, those not in attendance at The FADER FORT in Austin will be able to experience three-dimensional, 4K video of the show, providing a more immersive experience than traditional video and making you feel like you’re there.

From March 14-16, we’ll livestream the best acts of the day in VR180, along with a cutdown of each set that can be viewed at any time.

Check out the calendar below, grab your headset and get ready to see some of the best new artists on the scene without ever setting foot in Austin. Visit The FADER for the full lineup. See you at the Fort!

Making a video game in two days with Tilt Brush and Unity

Imagine you’re playing a video game, and you’re being attacked by a gang of angry space aliens. Wouldn’t it be great if you could just paint an object in 3D space and use it to defend yourself? A talented team of artists and game fanatics explored this very premise at Global Game Jam 2018, a game development hackathon. Seeing Tilt Brush as a fast, powerful and fun 3D asset creation tool, the team at Another Circus used the Tilt Brush Toolkit to create a virtual reality game in less than 48 hours.

“Pac Tac Atac” casts you as a space adventurer who has landed on an alien planet and needs to beam a rescue message into intergalactic space. But watch out, the locals are angry and in the mood to smash your transmitter. It’s up to you to keep them away!

What the aliens don’t know is that you’re armed with two cans of spray paint that let you magically draw any object in your imagination to defend yourself.


Once you’ve got your magic object, you can start fighting off the aliens with slices and dices, or by throwing your weapon and calling it back like a boomerang.


“Pac Tac Atac” was built in the Unity game engine, using art painted exclusively in Tilt Brush and exported as 3D models. Using Tilt Brush provided a number of benefits over traditional 3D modeling. For example, to make it easy to create lots of aliens, the development team first drew different body parts (heads, torsos, arms and legs) in Tilt Brush, then randomly assembled those parts into complete aliens in Unity. By procedurally generating bodies this way, they could easily scale up to dozens of aliens, each with a unique look and movement style.
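The team’s implementation was C# inside Unity; purely to illustrate the mix-and-match pattern they describe, a sketch of the part-assembly step might look like this (every name below is hypothetical):

```typescript
// Illustration of the mix-and-match idea only, not the team's Unity code: keep a pool
// of painted parts per slot and assemble a random combination for each alien.
type PartSlot = 'head' | 'torso' | 'arm' | 'leg';

interface AlienParts {
  head: string;
  torso: string;
  arms: [string, string];
  legs: [string, string];
}

const partPools: Record<PartSlot, string[]> = {
  head: ['head_spiky', 'head_blob', 'head_cyclops'],
  torso: ['torso_round', 'torso_tall'],
  arm: ['arm_tentacle', 'arm_claw', 'arm_noodle'],
  leg: ['leg_stubby', 'leg_spring'],
};

const pick = <T>(pool: T[]): T => pool[Math.floor(Math.random() * pool.length)];

// Each call yields a unique-looking alien from a handful of hand-painted parts.
function assembleAlien(): AlienParts {
  return {
    head: pick(partPools.head),
    torso: pick(partPools.torso),
    arms: [pick(partPools.arm), pick(partPools.arm)],
    legs: [pick(partPools.leg), pick(partPools.leg)],
  };
}

// Spawn a small horde of procedurally assembled aliens.
const horde = Array.from({ length: 12 }, assembleAlien);
console.log(horde);
```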


One of the biggest challenges the team faced was optimizing the Tilt Brush art for in-game performance. Given the amount of detail generated by each brush stroke, they improvised by creating assets with fewer strokes (like Jonathan Yeo and his 3D-printed bronze self-portrait), and by using Mesh Simplify, a Unity extension that lets developers reduce the poly count of their 3D models.

“Pac Tac Atac” is available for the HTC Vive now. Check out more here.

Announcing ARCore 1.0 and new updates to Google Lens

With ARCore and Google Lens, we’re working to make smartphone cameras smarter. ARCore enables developers to build apps that can understand your environment and place objects and information in it. Google Lens uses your camera to help make sense of what you see, whether that’s automatically creating contact information from a business card before you lose it, or soon being able to identify the breed of a cute dog you saw in the park. At Mobile World Congress, we’re launching ARCore 1.0 along with new support for developers, and we’re releasing updates for Lens and rolling it out to more people.


ARCore, Google’s augmented reality SDK for Android, is out of preview and launching as version 1.0. Developers can now publish AR apps to the Play Store, and it’s a great time to start building. ARCore works on 100 million Android smartphones, and advanced AR capabilities are available on all of these devices. It works on 13 different models right now (Google’s Pixel, Pixel XL, Pixel 2 and Pixel 2 XL; Samsung’s Galaxy S8, S8+, Note8, S7 and S7 edge; LGE’s V30 and V30+ (Android O only); ASUS’s Zenfone AR; and OnePlus’s OnePlus 5). And beyond those available today, we’re partnering with many manufacturers to enable their upcoming devices this year, including Samsung, Huawei, LGE, Motorola, ASUS, Xiaomi, HMD/Nokia, ZTE, Sony Mobile, and Vivo.

Making ARCore work on more devices is only part of the equation. We’re also bringing developers additional improvements and support to make their AR development process faster and easier. ARCore 1.0 features improved environmental understanding that enables users to place virtual assets on textured surfaces like posters, furniture, toy boxes, books, cans and more. Android Studio Beta now supports ARCore in the Emulator, so you can quickly test your app in a virtual environment right from your desktop.


Everyone should get to experience augmented reality, so we’re working to bring it to people everywhere, including China. We’ll be supporting ARCore in China on partner devices sold there—starting with Huawei, Xiaomi and Samsung—to enable them to distribute AR apps through their app stores.

We’ve partnered with a few great developers to showcase how they’re planning to use AR in their apps. Snapchat has created an immersive experience that invites you into a “portal”—in this case, FC Barcelona’s legendary Camp Nou stadium. Visualize different room interiors inside your home with Sotheby’s International Realty. See Porsche’s Mission E Concept vehicle right in your driveway, and explore how it works. With OTTO AR, choose pieces from an exclusive set of furniture and place them, true to scale, in a room. Ghostbusters World, based on the film franchise, is coming soon. In China, place furniture and over 100,000 other pieces with Easyhome Homestyler, see items and place them in your home as you shop online, or play games from NetEase, Wargaming and Game Insight.

With Google Lens, your phone’s camera can help you understand the world around you, and we’re expanding availability of the Google Lens preview. With Lens in Google Photos, when you take a picture, you can get more information about what’s in your photo. In the coming weeks, Lens will be available to all Google Photos English-language users who have the latest version of the app on Android and iOS. Also over the coming weeks, English-language users on compatible flagship devices will get the camera-based Lens experience within the Google Assistant. We’ll add support for more devices over time.

And while it’s still a preview, we’ve continued to make improvements to Google Lens. Since launch, we’ve added text selection features, the ability to create contacts and events from a photo in one tap, and—in the coming weeks—improved support for recognizing common animals and plants, like different dog breeds and flowers.


Smarter cameras will enable our smartphones to do more. With ARCore 1.0, developers can start building delightful and helpful AR experiences for them right now. And Lens, powered by AI and computer vision, makes it easier to search and take action on what you see. As these technologies continue to grow, we’ll see more ways that they can help people have fun and get more done on their phones.

Say hello to our third round of Jump Start creators

Jump is Google’s platform for professional VR video capture. It combines high-quality VR cameras and automated stitching that simplifies VR video production and helps filmmakers create amazing content. We launched the Jump Start program so that creators of all backgrounds can get access to Jump cameras and bring their ideas for VR video projects to life.

We’re wrapping up the year for the Jump Start program, and it’s been great to see the diversity of creators around the world using Jump cameras for a whole range of projects, everything from Lions in Los Angeles to a tour of the ancient Roman Forum to a sci-fi movie set on a futuristic Lunar Base. You can check out some recently published pieces on YouTube. We also just announced our third round of Jump Start participants. Let’s take a look at the cool stuff they’re working on.


Aidan Brezonick (Director), Justin Benzel (Author), Ivanna Kozak (Producer, Laïdak Films), Antoine Liétout (Producer, Laïdak Films), and Ivan Zuber (Producer, Laïdak Films)

Locations: LA, USA; Chicago, USA; Berlin, Germany; Paris, France

The team is working on a story set in the French countryside. It follows Henry, an aggrieved inventor struggling to overcome the laws of physics by reversing entropy. 


Alvaro Morales

Location: Washington, D.C., USA

Alvaro’s the co-founder of the Family Reunions Project.  He’s working on a collection of immersive experiences centered on undocumented immigrants.


Amaury La Burthe

Location: Toulouse, France

Amaury is creative director of Novelab/Audiogaming.  He’s working with Corinne Linder on a hybrid live action and CGI project about modern-day circuses.


Becky Lane

Location: Ithaca, USA

As a filmmaker and sociologist, Becky is creating an interactive journey through the history of burlesque dance to discover its impact on U.S. culture and women’s sexual empowerment.


Carmen Guzmán

Location: Puerto Rico

Carmen Guzmán is a Puerto Rican filmmaker based in NYC. She’s exploring the impact Hurricane Maria had on Puerto Rico’s communication systems and culture.


DimensionGate (Ian Tuason)

Location: Toronto, Canada

Ian Tuason, founder of DimensionGate, has showcased his work at the Cannes Film Festival, and is shooting the pilot episode of a VR horror serial.


Dominic Nahr and Sam Wolson

Location: Zurich, Switzerland

Dominic and Sam’s film will explore the aftermath of the Fukushima Daiichi nuclear disaster in Japan.


Fifer Garbesi

Location: Oakland, USA

Fifer’s project will traverse the many offshoots of our lingual creation myth in a delicate interactive dance between viewer and journey.


Harmonic Laboratory

Location: Eugene, USA

The interdisciplinary arts collective Harmonic Laboratory is documenting TESLA: Light, Sound, Color, an original 90-minute theatre performance on the elusive physicist and inventor, Nikola Tesla.


iNK Stories

Location: Brooklyn, USA

iNK Stories is a Story Innovation Studio. They’re working on the immersive experience Fire Escape and the large-scale VR installation, HERO (premiering at Sundance).


Lisa London

Location: San Francisco, USA

Lisa is producing “Keep Tahoe Blue,” a look at the successful environmental monitoring organization. It’s a piece on community, volunteerism, and making a difference.


Lizzie Warren

Location: Brooklyn, USA

Lizzie co-founded AROO, a feminist VR collective. A documentary filmmaker, one of Lizzie’s current VR projects explores the human/animal relationships within a wolf sanctuary.


Majka Burhardt and Ross Henry

Locations (Respectively): Jackson, USA; Chagrin Falls, USA

Majka and Ross share a VR journey about the power of one mountain and the water that takes you from the summit of Mount Namuli, Mozambique to the Indian Ocean.



Location: Venice, USA

More than 50 creators are coupling neurofeedback with stunning VR video to unlock creativity by training people to consciously control their state of mind in any environment.


MeeRa Kim & Michael Henderson (Arbor Entertainment)

Location: Los Angeles, USA

The Arbor team is working on several projects, including a 360 exploration of dance and music from the 1920s through the present day.


Noam Argov

Location: San Francisco, USA

Noam is a producer and National Geographic Explorer. Her team will use VR to get an inside look into the life of a Kyrgyz nomad as he pioneers a new adventure sport: horse-backcountry-skiing. 


Sarah Hill

Location: Columbia, USA

The StoryUP XR team is creating a brain-controlled VR experience where you conduct a handbell orchestra with your positive emotions.


Sherpas Cinema

Location: Whistler, Canada

The team is working on an experience that will take you on a guided heli-ski trip deep into the backcountry. High adrenaline, no crowds, and all the untouched powder you could ask for.

Go behind the scenes of “Isle of Dogs” with Pixel

“Isle of Dogs” tells the story of Atari Kobayashi, 12-year-old ward to corrupt Mayor Kobayashi. When, by Executive Decree, all the canine pets of Megasaki City are exiled to a vast garbage-dump, Atari sets off alone in a miniature Junior-Turbo Prop and flies to Trash Island in search of his bodyguard-dog, Spots. There, with the assistance of a pack of newly-found mongrel friends, he begins an epic journey that will decide the fate and future of the entire Prefecture.

The film isn’t out until March 23—but Pixel owners will get an exclusive sneak peek this week.

In “Isle of Dogs Behind the Scenes (in Virtual Reality),” the audience is taken behind the scenes in a 360-degree VR experience featuring on-set interviews with the film’s cast (voiced by Bryan Cranston, Bill Murray, Edward Norton, Liev Schreiber, Jeff Goldblum, Scarlett Johansson, Tilda Swinton, F. Murray Abraham and Bob Balaban). Get nose-to-nose with Chief, Boss, Rex and the rest of the cast while the crew works around you, for an inside look at the unique craft of stop-motion animation.

Pixel’s powerful front-firing stereo speakers and brilliant display make it perfect for watching immersive VR content like this. Presented in 4K video with interactive spatial audio that responds to where you’re looking, “Isle of Dogs Behind the Scenes (in Virtual Reality)” is a collaboration between FoxNext VR Studio, Fox Searchlight Pictures, Felix & Paul Studios, the Isle of Dogs production team, and Google Spotlight Stories.


“Isle of Dogs Behind the Scenes (in Virtual Reality)” is available today on the Google Spotlight Stories app, exclusively for Google Pixel phones (Pixel and Pixel 2) and best watched on the Daydream View headset. To watch, download the Spotlight Stories app.

On March 2, “Isle of Dogs Behind the Scenes (in Virtual Reality)” will become available in VR, 360 and 2D via YouTube VR, the Fox Searchlight YouTube channel, and any platform that has the YouTube VR app, including Daydream and Sony PlayStation VR. “Isle of Dogs,” from Fox Searchlight, hits theaters on March 23.

Augmented reality on the web, for everyone

In the next few months, there will be hundreds of millions of Android and iOS devices that are able to provide augmented reality experiences – meaning you’ll be able to look at the world through your phone, and place digital objects wherever you look. To help bring this to as many users as possible, we’ve been exploring how to bring augmented reality to the web platform, so someday anyone with a browser can access this new technology. In this post, we’ll take a look at a recent prototype we built to explore how AR content could work across the web, from today’s mobile and desktop browsers, to future AR-enabled browsers. Techies, take note: the last section of the post focuses on technical details, so stick around if you want to dig deeper.

How the prototype works

Article is a 3D model viewer that works for all browsers. On desktop, users can check out a 3D model—in this case a space suit—by dragging to rotate, or scrolling to zoom. On mobile the experience is similar: users touch and drag to rotate the model, or drag with two fingers to zoom in.

The desktop model viewing experience

To help convey that the model is 3D and interactive—and not just a static image—the model rotates slightly in response to the user scrolling.
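A minimal sketch of that viewer behavior, assuming Three.js with its standard GLTFLoader and OrbitControls (an illustration rather than the Article source; the model path is a placeholder):

```typescript
// Sketch of the desktop/mobile viewer: drag to rotate, scroll or pinch to zoom, plus
// a slight rotation tied to page scroll so the model reads as 3D and interactive.
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 1.5, 3);
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1.0));

// Drag to rotate, scroll/pinch to zoom.
const controls = new OrbitControls(camera, renderer.domElement);

let model: THREE.Object3D | undefined;
new GLTFLoader().load('models/spacesuit.glb', (gltf) => {  // placeholder path
  model = gltf.scene;
  scene.add(model);
});

// Nudge the model as the page scrolls, so users notice it is interactive.
window.addEventListener('scroll', () => {
  if (model) model.rotation.y = window.scrollY * 0.002;
});

renderer.setAnimationLoop(() => {
  controls.update();
  renderer.render(scene, camera);
});
```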


With augmented reality, the model comes alive. The unique power of AR is to blend digital content with the real world. So we can, for example, surf the web, find a model, place it in our room to see just how large it truly is, and physically walk around it.

When Article is loaded on an AR-capable device and browser, an AR button appears in the bottom right. Tapping on it activates the device camera, and renders a reticle on the ground in front of the user. When the user taps the screen, the model sprouts from the reticle, fixed to the ground and rendered at its physical size. The user can walk around the object and get a sense of scale and immediacy that images and video alone cannot convey.

Article’s AR interface as viewed on an AR-capable tablet
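The prototype ran in Google’s experimental AR browsers with a pre-WebXR API, so the sketch below is only a rough modern equivalent of the reticle-and-tap-to-place flow described above, written against today’s WebXR hit-test module; placeReticle and placeModel are hypothetical scene helpers:

```typescript
// Rough modern-equivalent sketch (not the Article source): keep a reticle on the
// nearest detected surface and, on tap, anchor the model there at physical scale.
async function startArPlacement(
  placeReticle: (pose: XRPose) => void,
  placeModel: (pose: XRPose) => void,
) {
  const session = await navigator.xr!.requestSession('immersive-ar', {
    requiredFeatures: ['hit-test'],
  });
  const refSpace = await session.requestReferenceSpace('local');
  const viewerSpace = await session.requestReferenceSpace('viewer');

  // Continuously cast a ray from the center of the view onto detected surfaces.
  const hitTestSource = await session.requestHitTestSource!({ space: viewerSpace });

  let latestPose: XRPose | undefined;
  session.addEventListener('select', () => {
    // On tap, fix the model to the ground at the reticle's last position.
    if (latestPose) placeModel(latestPose);
  });

  session.requestAnimationFrame(function onFrame(_time, frame) {
    const hits = frame.getHitTestResults(hitTestSource);
    if (hits.length > 0) {
      const pose = hits[0].getPose(refSpace);
      if (pose) {
        latestPose = pose;
        placeReticle(pose);  // show the reticle on the surface in front of the user
      }
    }
    session.requestAnimationFrame(onFrame);
  });
}
```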

To reposition the model, users can tap-and-drag, or drag with two fingers to rotate it. Subtle features such as shadows and even lighting help to blend the model with its surroundings.

Moving and rotating the model

Small touches make it easy to learn how to use AR. User testing has taught us that clear interface cues are key to helping users learn how AR works. For example, while the user waits momentarily for the system to identify a surface that the model can be placed upon, a circle appears on the floor, tilting with the movement of the device. This helps introduce the concept of an AR interface, with digital objects that intersect with the physical environment (also known as diegetic UI).

Diegetic activity indicators hint at the AR nature of the experience

Under the hood (and on to the technical stuff!)

We built our responsive model viewer with Three.js. Three.js makes the low-level power of WebGL more accessible to developers, and it has a large community of examples, documentation and Stack Overflow answers to help ease learning curves.

To ensure smooth interactions and animations, we finessed factors that contribute to performance:

  • Using a low polygon-count model;

  • Carefully controlling the number of lights in the scene;

  • Decreasing shadow resolution when on mobile devices;

  • Rendering the emulator UI (discussed below) using shaders built on signed distance functions, which render their effects efficiently at any resolution (see the sketch after this list).
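On that last point, a signed distance function reports how far a pixel is from a shape’s boundary, so a shader can compute a crisp, antialiased edge at any resolution instead of sampling a fixed-size texture. A tiny illustration of the idea, written as plain TypeScript standing in for fragment-shader code:

```typescript
// Illustration only: the per-pixel math an SDF-based UI shader performs, in plain
// TypeScript. Negative distances are inside the shape, positive outside.
function circleSdf(x: number, y: number, cx: number, cy: number, radius: number): number {
  return Math.hypot(x - cx, y - cy) - radius;
}

// Smoothstep over roughly one pixel of distance gives an antialiased coverage value.
function coverage(distance: number, pixelSize: number): number {
  const t = Math.min(Math.max((pixelSize / 2 - distance) / pixelSize, 0), 1);
  return t * t * (3 - 2 * t);
}

// Alpha for a pixel at (10.3, 4.7) against a circle of radius 8 centered at (8, 8):
const alpha = coverage(circleSdf(10.3, 4.7, 8, 8, 8), 1.0);
console.log(alpha); // 1 well inside the circle, 0 well outside, smooth in between
```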

To accelerate iteration times, we created a desktop AR emulator that enables us to test UX changes on desktop Chrome. This makes previewing changes nearly instant. Before the emulator, each change—no matter how minor—had to be loaded onto a connected mobile device, taking upwards of 10 seconds for each build-push-reload cycle. With the emulator we can instead preview these tweaks on desktop almost instantly, and then push to device only when needed.

The emulator is built on a desktop AR polyfill and Three.js. Uncommenting one line of code (which includes the polyfill) in the index.js file instantiates a gray grid environment and adds keyboard and mouse controls as substitutes for physically moving in the real world. The emulator is included in the Article project repo.


The spacesuit model was sourced from Poly. Many Poly models are licensed under Creative Commons Attribution Generic (CC-BY), which lets users copy and/or remix them, so long as the creator is credited. Our astronaut was created by the Poly team.

Article’s 2D sections were built with off-the-shelf libraries and modern web tooling. For responsive layout, typography, and the overall theme, we used Bootstrap, which makes it easy for developers to create great-looking sites that adapt responsively across device screen sizes. As a nod to the aesthetics of Wikipedia and Medium, we went with Bootswatch’s Paper theme. For managing dependencies, classes, and build steps we used NPM, ES6, Babel and Webpack.

Looking ahead

There’s vast potential for AR on the web—it could be used in shopping, education, entertainment, and more. Article is just one in a series of prototypes, and there’s so much left to explore—from using light estimation to more seamlessly blend 3D objects with the real world, to adding diegetic UI annotations to specific positions on the model. Mobile AR on the web is incredibly fun right now because there’s a lot to be discovered. If you’d like to learn more about our experimental browsers and get started creating your own prototypes, please visit our devsite.

Travel through time with Pepsi and WebVR

Ah, the Super Bowl—come for the action, stay for the commercials. This year, as part of its “Pepsi Generations” global campaign, Pepsi will extend its TV commercial into virtual reality.

Pepsi’s new commercial, “This is the Pepsi,”  takes viewers on a journey through some of the brand’s most iconic moments. In VR, Pepsi fans can remember those moments, and  feel what it was like to be there.

That’s why we collaborated to create “Pepsi Go Back,” a WebVR experience where fans travel through time and step into Pepsi commercials that became some of their biggest pop culture milestones.

Hop into the driver’s seat of Jeff Gordon’s car and hold on tight as you race against the “Back to the Future” DeLorean. 


Then, zip to 1992, and explore the Halfway House Cafe, where Cindy Crawford dazzled fans in one of the most famous commercials of all time.


In both environments, you can look around, interact with different parts of the experience and unlock cool stuff.


WebVR enables anybody with a desktop or mobile device to experience immersive content. This made it the ideal technology to take Pepsi’s fans on this nostalgic journey. Check out “Pepsi Go Back” on your smartphone with a VR headset like Cardboard or Daydream View, on Chrome, or on any desktop browser that supports WebVR. And if you find yourself in Minneapolis for the game, stop by the Pepsi Generations Live pop-up event for a demo with Daydream.
