If AR Core was ever asked — “Tell us about yourself in 10 mins”


I have been working with AR Core for the last five months, and it has never failed to excite me with the elegance of its architecture. So today I decided to explain AR Core in detail. Here, I will introduce the basic concepts of augmented reality along with some context: how and why it was developed, how it compares to and differs from its technological cousin, virtual reality, the hardware needed to view AR content, and how people are using AR today.

What is Augmented Reality?

I will explain augmented reality the way I understand it: “Augmented reality is the interactive experience of us humans with the real-world scene around us. In this process, the real-world objects mapped by the device camera are augmented with perceptual information generated by the augmented reality device. The quality and nature of that perceptual information may vary from device to device.”


How does AR differ from its technological cousin, Virtual Reality (VR)?

  1. Augmented reality is a direct or indirect live view of a physical, real-world environment whose elements are “augmented” by computer-generated perceptual information. Virtual reality is the use of computer technology to create a simulated environment, placing the user inside an experience.
  2. The most obvious difference is in the hardware itself. A Virtual Reality experience must be viewed in some kind of headset, whether it’s powered by a smartphone or connected to a high-end PC. VR headsets require powerful, low-latency displays capable of projecting complete digital worlds without dropping a frame. Augmented Reality technology does not share this requirement. You can hold up your phone and have a headset-free Augmented Reality experience any time.
  3. Both technologies enable us to experience computing more like we experience the real world; they make computing work more like we do in regular life in a 3D space. In terms of how the two technologies are used, think of it like this. Virtual Reality transports you to a new experience. You don’t just get to see a place, you feel what it’s like to be there. Augmented Reality brings computing into your world, letting you interact with digital objects and information in your environment.

“Generally speaking, the difference that makes AR a better medium for day-to-day applications is that users don’t have to shut out the world to engage with AR content, as they do with VR.”

Here are links to videos from two popular YouTube channels, ColdFusion and TechAltar, which will help make these concepts clearer.

Link to the ColdFusion Video:-

Link to the TechAltar video:-

Now I will introduce you to AR Core!!

Introduction

AR Core is Google’s platform for building augmented reality (AR) experiences on mobile devices, released in 2017. AR Core exposes APIs that enable any AR-compatible phone to sense its environment, understand the world around it, and interact with the information it perceives.

The three key capabilities AR Core uses to integrate virtual content with the real world, as seen through the phone’s camera, are listed below; a minimal configuration sketch follows the list:

  1. Motion tracking allows the phone to understand and track its position relative to the real world.
  2. Environmental understanding allows the phone to detect the size and location of all types of surfaces: horizontal, vertical, and angled surfaces like the ground, a coffee table, or walls.
  3. Light estimation allows the phone to estimate the environment’s current lighting conditions.
Source : Google I/O presentation
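To make these three capabilities concrete, here is a minimal Kotlin sketch against the ARCore Android SDK. It assumes ARCore is installed on the device and the camera permission has already been granted; the function name is illustrative.

```kotlin
import android.content.Context
import com.google.ar.core.Config
import com.google.ar.core.Session

// A minimal sketch: create an ARCore session (motion tracking comes with it)
// and switch on plane finding and light estimation.
fun createConfiguredSession(context: Context): Session {
    val session = Session(context)
    val config = Config(session).apply {
        // Environmental understanding: detect horizontal and vertical surfaces.
        planeFindingMode = Config.PlaneFindingMode.HORIZONTAL_AND_VERTICAL
        // Light estimation: a global estimate of the scene's brightness.
        lightEstimationMode = Config.LightEstimationMode.AMBIENT_INTENSITY
    }
    session.configure(config)
    return session
}
```

In a real app you would also handle the availability exceptions the Session constructor can throw when ARCore is missing or outdated.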
  • The link below will help you discover AR Core in detail, along with resources for getting your hands dirty with the AR Core SDKs for different platforms:-
  • To visualize AR Core, have a look at this video:-
Introductory video to AR Core

Time to get familiar with the components that make an AR experience a success!!

Now I will dive into the hardware components inside mobile devices that power augmented reality, and explore how AR assets can feel real and keep users immersed. I will also introduce the AR Core features that help make a digital object behave as though it exists in real-world space, as well as a few constraints facing AR today.

Components which enable motion tracking for Augmented Reality

  1. Accelerometer measures acceleration, which is the rate of change of velocity over time. Simply put, it measures how quickly your velocity is changing. Acceleration forces can be static/continuous, like gravity, or dynamic, such as movement or vibrations.
  2. Gyroscope measures and/or maintains orientation and angular velocity. When you rotate your phone during an AR experience, the gyroscope measures that rotation and AR Core ensures that the digital assets respond correctly (a sketch of reading these sensors follows this list).
  3. Phone camera. With mobile AR, your phone’s camera supplies a live feed of the surrounding real world upon which AR content is overlaid. In addition to the camera itself, AR Core-capable phones like the Google Pixel rely on complementary technologies like machine learning, complex image processing, and computer vision to produce high-quality images and spatial maps for mobile AR.
Source : Screenshot from AR Core’s Coursera presentation slides
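As a rough illustration of the inertial data involved, here is a Kotlin sketch using Android’s standard SensorManager API to read the accelerometer and gyroscope. ARCore consumes these sensors internally, so this is only for getting a feel for the raw values; the class name and logging are illustrative.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Logs the raw inertial data that AR platforms fuse with camera images.
class InertialLogger(context: Context) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager

    fun start() {
        sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_GAME)
        }
        sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE)?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_GAME)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        when (event.sensor.type) {
            // Acceleration in m/s^2 along x, y, z (includes gravity).
            Sensor.TYPE_ACCELEROMETER -> println("accel: ${event.values.joinToString()}")
            // Angular velocity in rad/s around x, y, z.
            Sensor.TYPE_GYROSCOPE -> println("gyro: ${event.values.joinToString()}")
        }
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit
}
```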

Components which enable location-based Augmented Reality

  1. Magnetometer gives smartphones a simple orientation relative to the Earth’s magnetic field. Because of the magnetometer, your phone always knows which direction is North, allowing it to auto-rotate digital maps depending on your physical orientation. This sensor is key to location-based AR apps (a compass-heading sketch follows below).
  2. GPS is a global navigation satellite system that provides geolocation and time information to a GPS receiver, like the one in your smartphone. For AR Core-capable smartphones, this sensor helps enable location-based AR apps.
Source : Screenshot from AR Core’s Coursera presentation slides
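To see how these sensors feed location-based AR, here is a small Kotlin sketch that turns the latest accelerometer and magnetometer readings into a compass heading using Android’s SensorManager helpers. The function and its inputs are assumptions for illustration; a real app would keep the two arrays updated from sensor listeners like the one above.

```kotlin
import android.hardware.SensorManager

// gravity: latest values from Sensor.TYPE_ACCELEROMETER
// geomagnetic: latest values from Sensor.TYPE_MAGNETIC_FIELD
// Returns the azimuth in degrees relative to magnetic North, or null if the
// rotation matrix could not be computed (e.g. the device is in free fall).
fun headingDegrees(gravity: FloatArray, geomagnetic: FloatArray): Float? {
    val rotation = FloatArray(9)
    if (!SensorManager.getRotationMatrix(rotation, null, gravity, geomagnetic)) return null
    val orientation = FloatArray(3)
    SensorManager.getOrientation(rotation, orientation)
    return Math.toDegrees(orientation[0].toDouble()).toFloat()
}
```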

Component which enables the view of the real world with AR

The display on your smartphone is important for crisp imagery and for showing 3D rendered assets. For instance, the Google Pixel XL has a 5.5" AMOLED QHD (2560 x 1440) display at 534 ppi, which means the phone can display 534 pixels per inch, making for rich, vivid images.

Now we will get familiar with the features and functionality of AR Core

Get ready!! Now we will focus on the state of AR. Below are a few questions that might be on your mind at this point:

Who’s behind it? How effective is the technology? What hurdles need to be overcome?

Don’t worry. Hopefully the content below will satisfy your curiosity about AR Core.

How does AR Core track?

Let’s now talk about tracking. AR relies on computer vision to see the world and recognize the objects in it. The first step in the computer vision process is getting the visual information, the environment around the hardware to the brain inside the device. The process of scanning, recognizing, segmenting, and analyzing environmental information is called tracking, in immersive technologies.

For AR, tracking happens in two ways: inside-out tracking and outside-in tracking.

  1. Outside-in tracking

In the case of outside-in tracking, cameras or sensors aren’t housed within the AR device itself; instead, they are mounted elsewhere in the space. Typically, they are mounted on walls or on stands to have an unobstructed view of the AR device. They then feed information to the AR device directly or through a computer. The external cameras or sensors can be as large as you want, at least theoretically, so outside-in tracking overcomes some of the space and power issues that can occur with AR devices. However, if your headset loses its connection to the outside sensors for even a moment, it can lose tracking, and the visuals will suffer, breaking immersion.

2. Inside-out tracking

In case of Inside-out tracking, cameras and sensors are built right into the body of the device. Smartphones are the most obvious example of this type of tracking. They have cameras for seeing and processors for thinking in one wireless battery-powered portable device.

On the AR headset side, Microsoft’s HoloLens is another device that uses inside-out tracking. The HoloLens frames include five cameras for analyzing the surrounding environment, one camera for measuring depth, one HD video camera, one light sensor, and four microphones, but all that hardware takes up space, consumes power, and generates heat. The true power of standalone AR devices will emerge when they become as ubiquitous and as useful as smartphones. In the meantime, smartphone-based AR will be the primary way most of the world engages with AR content.

How does motion tracking happen in AR Core?

Whether it’s happening on a smartphone or inside a standalone headset, every AR app is intended to show convincing virtual objects. One of the most important things that systems like AR Core do is motion tracking. AR platforms need to know when you move. The general technology behind this is called Simultaneous Localization and Mapping or SLAM. This is the process by which technologies like robots and smartphones analyze, understand, and orient themselves to the physical world.

  • SLAM processes require data collecting hardware like cameras, depth sensors, light sensors, gyroscopes, and accelerometers. AR Core uses all of these to create an understanding of your environment and uses that information to correctly render augmented experiences by detecting planes and feature points to set appropriate anchors.
  • In particular, AR Core uses a process called Concurrent Odometry and Mapping, or COM. That might sound complex, but basically, COM tells a smartphone where it’s located in space in relation to the world around it. It does this by capturing visually distinct features in your environment, called feature points. Feature points can be the edge of a chair, a light switch on a wall, the corner of a rug, or anything else that is likely to stay visible and consistently placed in your environment. Any high-contrast visual can serve as a feature point. This means that vases, plates, cups, wood textures, wallpaper designs, statues, and other common elements could all work as potential feature points.
  • AR Core combines its awareness of feature points with the inertial data from your smartphone, that is, all the information about your movement. Most smartphones today have gyroscopes for measuring the phone’s angle and accelerometers for measuring the phone’s acceleration. Together, feature points and inertial data help AR Core determine your phone’s pose.
  • Pose means an object’s position and orientation relative to the world around it. Now that AR Core knows the pose of your phone, it knows where to place digital assets so they seem logical in your environment. Remember, virtual objects need to hold their place and be at the right scale as you walk around them (a per-frame pose sketch follows the example below).

For example, the lion needs to have its feet on the ground to create the illusion that it is standing there, rather than floating in space.
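Here is a per-frame Kotlin sketch of what this looks like with the ARCore Android SDK: after feature points and inertial data are fused, the camera’s pose is simply read off the frame. The function name and logging are illustrative, and it assumes the session has already been resumed on the rendering thread.

```kotlin
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Reads the device pose produced by ARCore's motion tracking (COM/SLAM).
fun logDevicePose(session: Session) {
    val frame = session.update()                 // latest camera image + fused sensor data
    val camera = frame.camera
    if (camera.trackingState == TrackingState.TRACKING) {
        val pose = camera.pose                   // position and orientation in world space
        println("phone at (${pose.tx()}, ${pose.ty()}, ${pose.tz()})")
    } else {
        // Tracking drops when there are too few feature points, e.g. a blank wall or darkness.
        println("not tracking yet")
    }
}
```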


How does AR Core understand the environment?

Environmental understanding is AR Core’s process for seeing, processing, and using information about the physical world around an AR device.

The process begins with feature points, the same feature points used for motion tracking. ARCore uses your phone’s camera to capture clusters of feature points along a surface to create what’s known as a plane.


Plane finding is the term for ARCore’s ability to detect and generate flat surfaces. AR Core’s awareness of those planes is what allows it to properly place and adjust 3D assets in physical space, such as on the floor or on a table; otherwise, objects would just float. This process enables you to do things like see how a plant would look on your desk, or place a virtual human in front of you for a conversation. Once you know where the planes in the world are, you can hit-test (ray-cast) to see which plane the user is tapping on. This allows you to place objects on the floor or on your desk, making them follow the same rules of physics as real solid objects.
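A hedged Kotlin sketch of plane finding with the ARCore Android SDK is below: each frame you can ask ARCore which planes it has detected or grown. The function name and logging are illustrative.

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.TrackingState

// Lists the planes ARCore updated this frame, with their orientation and rough size.
fun logDetectedPlanes(frame: Frame) {
    for (plane in frame.getUpdatedTrackables(Plane::class.java)) {
        if (plane.trackingState != TrackingState.TRACKING) continue
        // Type distinguishes floors and tables (horizontal) from walls (vertical).
        println("plane ${plane.type}: roughly ${plane.extentX} m x ${plane.extentZ} m")
    }
}
```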

  • Gyroscopes and accelerometers combined with your smartphone’s camera and the ARCore’s unique software, all add up to the discovery and detection of planes. This ability is unique to smartphone powered AR, as it requires the use of all those internal components that are already built into the system. This is how ARCore addresses the issue of context awareness.
  • So, what is Context awareness?

The most important component of realism, context awareness, is also the most difficult to achieve. As we discussed earlier, AR hardware has to be aware of essentially every single object in its environment. It needs to understand that there is a desk, a chair, and a table next to a bookcase, a vase, and a television. It needs to know which of these items is taller, shorter, fatter, or wider than the others, and how this changes as the subject moves around in space. That’s a lot to track. Generating this awareness quickly and without a drop in the digital objects’ fidelity, smoothness, or functionality is one of the biggest challenges facing AR creators today. Companies like Google are investing in software tools like AR Core to help address some of these issues.

  • Multi-plane detection and spatial mapping

ARCore is able to track multiple surfaces, such as a table, a sofa, and the floor, all at the same time. If desired, assets can be placed on any of these surfaces, and each surface has the same anchoring and posing capabilities to keep the objects behaving realistically. Notice how hit tests against different planes give accurate 3D poses for the assets, allowing the graphics system to render them at different sizes and depths from the camera’s perspective (see the sketch after the notes below).

  • For example, you can see that objects farther away look smaller than closer ones. Since AR Core is constantly learning from the environment, the longer you use your phone to spatially map the environment, the better the pose is understood.
  • For example, placing a hand in the scene close to the camera may cause AR Core to map planes onto your hand. This will cause issues as soon as you move your hand, because AR Core assumes planes do not move. Digital objects are typically projected a few feet away from you, so putting your hand in front of them will just highlight the lack of occlusion and might confuse the system. In general, it’s best not to place an object until the room has been sufficiently mapped and static surfaces have been defined.
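The sketch below illustrates the multi-plane behavior described above, assuming the ARCore Android SDK, a current Frame, and screen-space tap coordinates: a single tap can intersect several planes, and each hit carries its own pose and distance from the camera.

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.Plane

// Reports every plane a tap intersects; farther hits correspond to assets that
// the renderer will draw smaller and deeper in the scene.
fun describePlaneHits(frame: Frame, tapX: Float, tapY: Float) {
    frame.hitTest(tapX, tapY)
        .filter { it.trackable is Plane }
        .forEach { hit ->
            // hit.distance is the distance in meters from the camera to the hit point.
            println("plane hit ${"%.2f".format(hit.distance)} m away, pose ${hit.hitPose}")
        }
}
```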
  • How do simple surfaces challenge AR?

AR is constantly trying to find interesting and distinct feature points to see, track, and remember for orienting the device and digital assets in an augmented experience. This means that the software will have more trouble with something plain, like a white wooden coffee table, than it would with a knotty wooden table with a coffee mug on it. Distinct texture is important for providing the contrast needed to create feature points. To put it simply, the more differentiation there is on the surfaces in a given space, the better AR apps will function.

How to place and position assets?

There are a few basic rules that augmented reality developers need to remember about the way objects behave in AR. These behaviors are the key to merging the real and digital worlds seamlessly. The first of these behaviors is the placing of assets.

  • Stationary AR objects need to stick to one point in a given environment. This can be something concrete such as a wall, floor, or ceiling, or it could be suspended somewhere in mid-air. Whatever the case, placing means these objects stay where they’re positioned, even when users are in motion. The mug on your coffee table doesn’t jump around when you move your head; if you look away, it’s right there when you look back again. For AR to maintain the illusion of reality, digital objects need to behave the same way real ones do.
  • Solid augmented assets

AR objects need to appear solid. This may sound obvious, but it takes conscious effort to achieve. As you engage with and ultimately create AR content, keep in mind that AR objects should never overlap with real-world objects, nor should they appear to be floating in thin air unless they are something like an airplane or balloon. If either of these missteps is in your app, it will break immersion for your users.

User interaction: hit-testing and pose

Now let’s talk about hit-testing. Hit-testing lets you establish a pose when placing objects, and it is the next step in the AR Core user process after feature tracking and plane finding. Hit-testing works by drawing a ray out from the phone and extending it in that direction until it hits a plane. When it establishes this connection, it allows AR Core to establish a position and orientation for digital objects.
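A minimal Kotlin sketch of this ray-cast with the ARCore Android SDK follows. It takes the tap event, keeps the first plane whose polygon actually contains the hit, and pins an anchor there; the function name is illustrative.

```kotlin
import android.view.MotionEvent
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane

// Turns a screen tap into an anchor on the first plane the ray actually hits.
fun anchorFromTap(frame: Frame, tap: MotionEvent): Anchor? {
    for (hit in frame.hitTest(tap)) {
        val trackable = hit.trackable
        if (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) {
            // The hit pose gives both position and orientation for the digital object.
            return hit.createAnchor()
        }
    }
    return null
}
```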

Why are scaling and sizing important for virtual assets?

In addition to sticking wherever they are placed in the real world, AR objects need to be able to scale.


Let’s think about it like this. When a car is coming toward you from a distance, it starts out small and gets bigger. A painting viewed from the side looks very different when you walk around and face it head on. Our physical distance from a given object and our orientation around it change how it appears to us. A well-constructed AR experience will incorporate objects that are not only appropriately placed, but that also look different depending on whether you stand right next to them, below them, above them, or view them from afar. This is called scaling.

Together, placing and scaling are what take AR objects from digital novelties, to assets that could potentially replace real world counterparts.

How important are environmental conditions (lighting) for AR Core?

  • Light estimation

Have you ever noticed how your phone screen automatically dims or brightens depending on where you are standing? That happens because many smartphones have a light sensor. Light sensors allow for features like brightness management and automatic screen lock when the phone is raised to your ear. Current AR technology only allows a global estimate of the lighting, such as brightness, color, and temperature.

AR Core uses light estimation by scanning the camera image’s pixels to determine an average of the incoming light, which helps it decide how to light an AR object inside a specific environment. Light and shadows are a big part of what lets your eyes accept that an object is real. If you’ve ever been able to tell that actors in a film are standing in front of a green screen rather than in the depicted environment, that’s because the lighting is incorrectly matched. Light estimation is yet another way AR Core allows users to create more believable AR apps, games, and experiences.
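Here is a hedged Kotlin sketch of reading that estimate each frame with the ARCore Android SDK; the function name and logging are illustrative, and the values would normally be handed to the renderer rather than printed.

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.LightEstimate

// Reads ARCore's global light estimate so virtual materials can brighten and tint
// to match the room.
fun readLightEstimate(frame: Frame) {
    val estimate = frame.lightEstimate
    if (estimate.state == LightEstimate.State.VALID) {
        val correction = FloatArray(4)            // r, g, b scale factors + pixel intensity
        estimate.getColorCorrection(correction, 0)
        println("average intensity ${estimate.pixelIntensity}, correction ${correction.joinToString()}")
    }
}
```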

  • Lighting for increased realism

Just like a real world object, objects in AR need to respond to different patterns of lighting to make sense in our minds. The colors, shading, and shadows cast by these objects all need to behave properly both in the initial lighting of a scene and in the case of a lighting change.

For example, if you dim the lights during an AR experience, the AR objects should change in color and shading. Similarly, if you move an object around, its shadows need to move accordingly, just as they would in real life.

  • How do low-light conditions limit AR?

AR is still a new and emerging technology, particularly for smartphones. There’s a lot that the AR Core platform can do, but let’s take a moment to learn about some of its current limitations.

So just like your own eyes, AR needs light to see. In order to find those feature points and planes, your phone’s camera and other sensors need to be able to grab a good picture of what’s actually located in the world around you. This means that in dim or dark environments, most Augmented Reality devices will struggle to properly understand your environment, and therefore be unable to properly render the experience.

Low light conditions are a problem for every AR tracking system that exists today. The key to these technologies is their ability to orient themselves by seeing and understanding the real world. To do that the real world needs to be well lit and visible. This will require advancements in camera technologies, and computer vision.

AR’s technical constraints: size, power, heat

  • The details of what makes AR challenging from a technical standpoint are complex, but they can be boiled down to three simple words: size, power, and heat. We’ve come a long way when it comes to miniaturized processors and graphics cards, but we’re still not quite at the level we need to make high-end, everyday AR a reality.
  • Rendering an AR experience takes a lot of power. Just think about how much your cell phone battery drains when you’re streaming video. Now imagine if it wasn’t merely streaming images but generating those images, and doing so while also tracking every other object in a room and re-calibrating the image every time your head changes its position or rotates.
  • One workaround is an external battery pack that clips to your belt. That might work for now, but ultimately the power problem is one we’re going to have to solve within the frame of the device if we want AR to reach its full potential. You’ve probably noticed that every PC or laptop you’ve ever owned has a fan inside. Computing generates heat, a lot of it. In fact, the more power used, the more heat gets generated, and the smaller the device, the more slowly it sheds that heat.
  • AR is a highly complex process, and therefore generates a lot of heat, and that heat in turn can slow down processors or even short them out altogether. Managing this heat is even harder with the limited size and structural requirements of an AR headset. There isn’t much room for heat sinks and fans in a device that’s supposed to eventually be lightweight and look like an ordinary pair of glasses. Some AR headset manufacturers are packing all of the processing power into the frames of the visor itself, while other manufacturers are using external hardware like battery packs to address the issue.

Computer vision limitations

The final stumbling block AR needs to overcome on its path to maturity is Computer Vision, which is also the most challenging part to solve.

  • Computer Vision is the term for the hardware, software, and processes that allow computers to see and understand the physical world. For example, you might be able to search Google for dogs and find dogs, but that’s because Google’s unique search algorithms and tools have categorized images as dogs.
  • Computer Vision processes, however, would allow search engines to actually see the pictures they are searching and recognize a dog on their own. That is a simple example with huge implications. Computers that can see can recognize pedestrians and stop signs to make sure autonomous cars work the way they’re supposed to, and they can look at the world around you to place digital assets where they naturally belong. This is an amazing concept, but it is also very difficult to achieve. Currently, Computer Vision is a fast-growing but limited technology. Giving computers the ability to recognize the full catalog of earthly objects at any time of day and segment them into useful groups just isn’t something we’ve completely pulled off yet.

Occlusion

Occlusion refers to what happens when an image or object is blocked by another. Let me explain: move your hand in front of your face, and you have just occluded the computer screen with your hand. Now imagine if you moved your hand in front of your face and the screen was still visible; you’d probably get a little concerned.

AR objects have to play by the rules of occlusion if we want them to seem real. This means that AR hardware has to not only understand where the object is in the room but also its relative distance from the user compared with any other objects physical or digital.

Occlusion means hiding virtual objects behind other virtual objects, as well as behind objects in the real world.

Say you move behind a wall while using AR; if you can still see the AR objects, it will break your sense of immersion. Seamless occlusion requires constant re-calibration, since users can move in any direction at any given moment, which is why it’s one of the trickiest aspects of building successful AR content.

  • Occlusion between virtual assets

Let’s think about what happens when you place a second object into the scene behind the first one: it is blocked from your view. Remember, occlusion occurs whenever one solid object moves in front of another. Creating occlusion is essential for establishing AR realism. Even as you move your phone around the environment, the assets will continue to behave the way we would expect them to if they were real. However, for now this occlusion is only possible between digital objects; a real-world object will not block a virtual object. A good workaround for this current limitation is to design experiences in which real-world objects are not expected to occlude your virtual ones.

  • Constraints of occlusion and shading

Well, another thing to remember is that currently, AR Core cannot occlude digital objects when real-world ones block them from view. This means that even if your character should technically appear behind a desk, they’ll instead float in front of it. The dream of AR is for this type of occlusion to occur naturally but in the meantime, it’s important to know the limitation in order to find creative workarounds. Speaking of creative workarounds, another feature AR Core does not currently support is shadows. The good news is they are supported in 3D game engines like Unity, which allows mobile AR developers to create more realistic content for AR Core.

Now let’s get familiar with Anchors

Once AR Core has analyzed your surroundings and placed planes and reference points where they belong, you’ll be able to set anchors for your AR objects.

Anchors, also referred to as anchor points, are points in your environment where AR Core knows the respective digital object should always stay. This applies specifically to static digital objects.

  • For example, say you want to place a digital lamp on a table. You would set the anchor on top of the table, which AR Core has already discovered and recognized as a horizontal plane. Once that lamp is placed, it will stay where you’ve put it and respond the way it should to your movements and orientation. If you turn around, the lamp stays on the table. If you turn back, it will still be there waiting for you.
  • For objects that are meant to move around in space, such as an airplane or a helicopter, anchoring like we described for the lamp wouldn’t apply. Anchor points are hard to pull off for AR platforms because setting them requires all of the plane finding, motion tracking, and computer vision systems. These points separate top-quality AR systems from those that simply project digital objects onto the feed from your phone’s camera. The reason they’re needed is that motion tracking is not perfect: as you walk around, error (referred to as drift) accumulates, and the device’s pose may not reflect where you actually are. Anchors allow the underlying system to correct that error by indicating which points are important.

Placing with anchor points

Since the virtual camera knows the phone’s position and orientation in space, an asset placed via an anchor point stays put as you move your phone around. This anchoring is what ensures that the 3D asset remains in place and behaves the way an object actually would if it were resting at that point in physical space. With anchor points, AR Core enables you to walk around the space without diminishing the asset’s realism. Anchor points are the key to creating truly realistic AR experiences.
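A sketch of how an anchor is used each frame with the ARCore Android SDK is shown below, continuing from the hit-test sketch earlier. The drawAt callback stands in for whatever renderer you use and is purely illustrative.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.TrackingState

// Each frame, draw the asset at the anchor's latest pose. ARCore keeps refining the
// pose as it corrects drift, so the asset stays glued to the real-world point.
fun drawAssetAt(anchor: Anchor, drawAt: (FloatArray) -> Unit) {
    when (anchor.trackingState) {
        TrackingState.TRACKING -> {
            val modelMatrix = FloatArray(16)
            anchor.pose.toMatrix(modelMatrix, 0)   // 4x4 column-major matrix for the renderer
            drawAt(modelMatrix)
        }
        TrackingState.PAUSED -> Unit               // keep the anchor, skip drawing this frame
        TrackingState.STOPPED -> anchor.detach()   // tracking will not resume; release it
    }
}
```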

Here is the link to know more about AR Core Anchors in detail:-

Cloud anchors for shared AR


Anchors are the mechanism by which you attach virtual content to a trackable real-world point. Building on this concept, and supported by the cloud, Cloud Anchors are a cross-platform feature that allows both iOS and Android users to share the same AR experience despite using different underlying AR technologies; traditionally, anchors have been isolated to one device. Cloud Anchors can be shared by multiple devices simultaneously.

This makes AR possible for groups of people rather than just individuals, allowing for shared, collaborative experiences like redecorating your home, playing games and making art in 3D space together.


“ Cloud Anchors are critical to the growth of AR as a computing platform, because so much of AR is dependent on sharing digital information with other people in the real world.”

Here’s how users set up Cloud Anchors for shared Augmented Reality (AR)

1. First, host a native anchor to the cloud.

Source : Screenshot from AR Core’s Coursera presentation slides

2. Second, create a Cloud Anchor (this produces a Cloud Anchor ID).

Source : Screenshot from AR Core’s Coursera presentation slides

3. Third, share the Cloud Anchor ID.

Source : Screenshot from AR Core’s Coursera presentation slides

4. Fourth, resolve a Cloud Anchor with the ID that was generated.

Source : Screenshot from AR Core’s Coursera presentation slides

5. Fifth, the Cloud Anchor is resolved (i.e., the devices are synced).

Source : Screenshot from AR Core’s Coursera presentation slides

With Cloud Anchors established, users can experience AR in totally new ways, whether through education, gaming, shopping, or creative experiences.
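Below is a hedged Kotlin sketch of this host-and-resolve flow using the ARCore Android SDK’s Cloud Anchor calls (the original, polling-style API). Sharing the ID between devices (step 3) is up to the app, for example via a backend such as Firebase, and is not shown; function names are illustrative.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Session

// Cloud Anchors must be switched on in the session configuration.
fun enableCloudAnchors(session: Session) {
    val config = Config(session).apply { cloudAnchorMode = Config.CloudAnchorMode.ENABLED }
    session.configure(config)
}

// Device A (steps 1-2): host a local anchor; ARCore returns a new anchor whose
// cloudAnchorId becomes available once hosting succeeds.
fun hostAnchor(session: Session, localAnchor: Anchor): Anchor =
    session.hostCloudAnchor(localAnchor)

// Device B (step 4): resolve the shared ID into an equivalent anchor in its own session.
fun resolveAnchor(session: Session, cloudAnchorId: String): Anchor =
    session.resolveCloudAnchor(cloudAnchorId)

// Both devices (step 5): poll each frame until the anchor reports SUCCESS, i.e. synced.
fun isSynced(cloudAnchor: Anchor): Boolean =
    cloudAnchor.cloudAnchorState == Anchor.CloudAnchorState.SUCCESS
```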

For more information, visit the announcement from Google I/O 2018 here:

AR announcements from Google I/O 2018


Applications of AR in Prospective Areas

Now let’s have a look on how AR can be effective in different domains.

AR for shopping and retail

Let us begin with shopping. Because AR integrates seamlessly with the real world, it has proven to be a powerful medium for shopping and retail. AR can let you try on a watch, a shirt, or a new shade of makeup, all without ever leaving your home. But the top use case for AR shopping right now is furniture.

  • According to a few major studies, furniture is the most popular category people want to shop for with AR. It’s tough to know if a $1,000 couch will be the right one for you if you aren’t sure it’ll fit in your living room or match your curtains. AR’s unique ability to work within the world around us makes it easy to place digital versions of furniture right inside your house. Unlike 2D images, which force you to imagine the object in your home, AR makes full use of 3D space, letting you see the furniture at the exact size and dimensions it would have in real life.

AR for business

When it comes to business, AR Core enables experiences for many different kinds of professional organizations. Warehouses can build helpful navigation and instructions for workers. Architecture firms can display designs in 3D space. Retailers can give customers novel ways to engage with products, and advertisers can reach consumers with immersive campaigns. This is just the tip of the iceberg. The power of AR is that it lets users build relationships with brands inside their physical spaces rather than trapped behind screens.


AR for social media

One of the most obvious uses for AR is social media and social sharing. Snapchat was the first social media platform to fully embrace AR. The platform introduced Lenses in 2015, building off the popularity of filters that allowed users to digitally augment and manipulate faces, including that famous rainbow-vomiting lens. Though many users may not have thought of it this way, it is a lightweight expression of AR. Facebook followed with its own AR camera effects platform. Google is working to develop an intuitive platform for creating social AR content through AR Stickers, which allow users to import animated and interactive 3D objects for use in social media.


AR for gaming

Although the possibilities are still being fully explored, AR devices and smartphones are able to pull off some truly one-of-a-kind entertainment experiences.

In 2016, Pokemon Go became the first viral AR game. Since then, we’ve seen the launch of similar games from mainstream franchises like Harry Potter, The Walking Dead, and Ghostbusters, and we’ll see many more in the coming years. But location-based adventure games like these are only one possible game application for AR. It’s also worth noting that while Pokemon Go was popular, it wasn’t full AR: it just overlaid flat 2D images on the real world. Creatures in that initial launch of Pokemon Go didn’t behave the way they would if they were actually in your physical space; instead, they floated on your screen and only looked properly embedded in the environment if you lined them up with the ground.


AR for education

The demonstration of complex subjects (such as biology) is another of AR’s greatest capabilities, allowing learners to engage with spatial content visualized right in front of them. To that end, Google launched an AR application for education.

  • Expeditions AR is an educational experience designed to help teachers show students information with simple and engaging AR visuals. For example, students can explore a strand of DNA.
  • Spatial learning allows students to engage with 3D content directly, rather than having to imagine it while reading a textbook. And education is a broader category than just the classroom: AR will allow us to create training environments where we can measure and incorporate information in real time.

AR for healthcare

AR is already used in medicine, and that will only increase as the technology matures. Doctors and nurses are using AR’s enhanced visualization capabilities to more successfully diagnose patients, plan procedures, and execute treatment plans. 3D visuals offer these practitioners much more to learn from than 2D visuals. AR could one day replace traditional charts or guide surgeons through complex operations one step at a time. Medical science is one of the most exciting areas for AR to impact, both today and tomorrow. The one thing all of these arenas have in common is their need for new and better AR content. That’s where tools like AR Core, and future creators like you, come into play.


That’s all about the basics. If you made it to the end, thank you for sticking around for so long 🙂. Please feel free to use the comments section to suggest modifications and/or additions. Finally, I would like to convey my heartfelt thanks to Google for creating this wonderful project and opening it up to developers like me to build our dreams.

This is the first blog in a series of AR blogs that I am planning to write in the coming days. Next up is WebAR (including an awesome ARCore application).
