What NOT to Do in Augmented Reality
What not to do in AR — insights
I have to confess something. VR was the key factor that prompted me to start working with 4Experience in August 2018. Over time, however, it turned out that I enjoy creating AR applications much more. It was fascinating to be able to create an experience that expands the real world. It also forced me to think outside the box and encouraged me to be creative and experiment. It gives me great satisfaction to see my virtual objects blend seamlessly with the world seen through the lens of the mobile camera. I would not be able to achieve this without knowing the design guidelines that let me create engaging and comfortable applications. Below is an example of an application whose creators fulfilled the task entrusted to them perfectly.
It’s simple and funny, the effect of burning ads looks great, and the user gets a free burger as a reward. Great work.
But not all AR apps are great. Some were not created in accordance with established principles and proven solutions, and for this reason they have no chance of appealing to users. Below is the video promoting the application that prompted me to write this article and share my insights with you.
Everything has a purpose
The idea behind PhotoStudio AR was, in my opinion, very good. If you want to preview the results of a photo session in a location you have scouted, you can place a virtual model on a scanned surface. You can also modify the lighting conditions and thus get a preview of the results you could achieve.
Let’s check how the application works in practice, thanks to a review by Chelsea Northrup, a professional photographer who runs a very nice YouTube channel on the subject with her husband.
The author of the review admitted that she would use such an application in her work. However, the application turned out to be so problematic that she quickly gave up on this idea.
What went wrong?
Support for portrait and landscape modes
The first thing I noticed is that Chelsea is holding the mobile device horizontally, while the entire interface suggests that the app should be used in portrait mode.
I do not consider this a user mistake. After all, the application should allow you to take photos both vertically and horizontally, so the interface should adapt to the current orientation of the device. If, for some reason, landscape use is not possible, the application should say so and provide a tutorial explaining how to use it.
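The core of an orientation-aware interface is simply choosing the layout from the live screen dimensions instead of hard-coding portrait. A minimal sketch of that decision in Python (the names are my own illustration, not PhotoStudio AR's actual code):

```python
from enum import Enum

class Orientation(Enum):
    PORTRAIT = "portrait"
    LANDSCAPE = "landscape"

def ui_layout(width_px: int, height_px: int) -> Orientation:
    # Choose the layout from the current screen dimensions rather than
    # assuming the app will only ever be held in portrait.
    return Orientation.LANDSCAPE if width_px > height_px else Orientation.PORTRAIT
```

In a real app this check runs on every orientation-change event, repositioning the controls accordingly.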
AR view display
Another thing I would like to draw your attention to is the user interface, which takes up too much of the screen.
A generally accepted rule in AR applications is to design a minimalist interface that does not distract from what matters most: the AR view. After all, we want the user to be as involved in the experience as possible. Most of these items could be hidden under a single button containing a set of options.
Separately, I would like to mention something that can discourage a large portion of users right at the start: some models are available only for an additional fee.
This is not a problem specific to AR; it applies to all types of applications. If an application is paid (PhotoStudio AR costs $10 on Google Play, for example), it should not additionally include microtransactions. That is not fair to the user, who may simply feel cheated. In my opinion, if a mobile application contains in-app payments, it should be free, include the basic functionality, and let the user decide which functions to pay to extend it with.
Size of the experience
What else caught my attention is the location where Chelsea is testing the app. Note the stairs on the right, the chairs and tabletop on the left, and the pillar directly in front of the tester. In such conditions it will be difficult to get an immersive experience. It is possible that there was no clear information on how much space the user needs to enjoy the experience fully. The proof: thanks to the inadequate surroundings, the application allows you to place a virtual model in the middle of the staircase.
Plane detection visualization, placement distance, and restart option
In my opinion, the application lacks a virtual representation of the scanned surfaces. Such a visualization helps the user understand what the application “sees”: which areas of the world have been recognized and are ready for use. It also improves the perception of depth and lets you judge how accurately an object will be placed on a surface. Whenever the user finds the accuracy unsatisfactory, they should be able to repeat the surface-scanning process.
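The logic behind this is a small piece of session state: collect the planes the AR framework reports, gate placement on having at least one, and let the user throw the scan away and start over. A hedged sketch (hypothetical class, not tied to ARCore's or ARKit's actual API):

```python
class PlaneScanSession:
    """Tracks surfaces the AR framework reports and lets the user rescan."""

    def __init__(self):
        self.planes = []          # surfaces recognized so far
        self.show_overlay = True  # draw a grid over each detected plane

    def on_plane_detected(self, plane):
        # Called by the framework whenever a new surface is recognized.
        self.planes.append(plane)

    def ready_to_place(self):
        # Objects may only be anchored once at least one plane exists.
        return len(self.planes) > 0

    def restart(self):
        # Let the user discard an unsatisfactory scan and try again.
        self.planes.clear()
```

The `show_overlay` flag stands in for rendering a visible grid over each detected plane, which is exactly the feedback the app is missing.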
I also believe the application suffers from a lack of support in choosing the optimal distance at which the model should be placed. The problems described further on could probably have been avoided if a distance limit had been implemented.
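A distance limit is a one-line clamp on the raycast hit distance before anchoring the object. A minimal sketch, with threshold values that are purely my own suggestion:

```python
def clamp_placement_distance(hit_distance_m: float,
                             min_m: float = 0.5,
                             max_m: float = 5.0) -> float:
    # Keep the placement point within a comfortable range of the camera:
    # too close and the model fills the screen; too far and a tall model
    # reads as a nearby short one.
    return max(min_m, min(hit_distance_m, max_m))
```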
Realism, shadow planes, and depth collisions
However, I will start with what is positive. The appearance of the model deserves recognition. You can see that it is realistic and this is very important when we try to make our AR content blend into the user’s physical environment. Probably PBR materials, normal maps, and ambient occlusion were used here.
Unfortunately, there are no shadows, which makes the model appear to float in the air; it lacks a grounding point. Immersion is also spoiled by the pillar in the middle of the location, which I mentioned earlier. I would also revisit the model’s default height: it was probably meant to be about 2 meters, but given the earlier comments, its above-average height makes it seem closer than it really is (i.e. about a meter behind the pillar).
Gestures and scale limits
While we’re talking about the model’s height, see what happened as Chelsea began to scale the model.
It is worth noting that the application has model manipulation implemented using common gestures. Thanks to this, the user can intuitively rotate (two fingers on the screen moving in the same direction) and change the size of the model (two fingers on the screen moving towards each other or in opposite directions).
However, there are no sensible scale limits, which lets you create an unnaturally small or large model. This again breaks the illusion that the model is actually present in the real world.
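Fixing this is the same clamping idea applied to the pinch gesture: scale by the ratio of the current finger spread to the spread at the start of the gesture, then clamp. A sketch with illustrative bounds (for a human-sized model you would keep the scale close to 1):

```python
def pinch_scale(start_scale: float,
                start_spread_px: float,
                current_spread_px: float,
                min_scale: float = 0.75,
                max_scale: float = 1.25) -> float:
    # Scale the model by the ratio of the current finger spread to the
    # spread when the gesture began, then clamp so the model can never
    # become cartoonishly small or large.
    factor = current_spread_px / start_spread_px
    return max(min_scale, min(start_scale * factor, max_scale))
```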
In the next part of the video, we can see the reviewer trying to change the model’s lighting. Optimizing the lighting conditions in a scene is a key aspect of achieving realism. For this reason, both ARCore and ARKit offer systems that estimate lighting intensity based on the camera image and the sensors available in the device. As a result, the virtual scene and its model change lighting conditions along with changes in the real world.
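On the application side, consuming such an estimate usually means easing the virtual light toward the latest reported ambient value rather than jumping to it, so the lighting tracks the real world without flicker. A hedged sketch of that smoothing step (my own illustration, not either framework's API):

```python
def smoothed_light_intensity(previous: float,
                             estimated_ambient: float,
                             smoothing: float = 0.2) -> float:
    # Move a fraction of the way toward the framework's latest ambient
    # estimate each frame, so the model's lighting follows the real
    # world without visible flicker.
    return previous + smoothing * (estimated_ambient - previous)
```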
The application includes additional lighting settings. Unfortunately, as the video shows, even a professional photographer could not decipher how to use them, probably due to insufficient or illegible labeling. The screenshot also shows a large number of buttons and icons as well as two sliders that are hard to read at first glance. In my opinion, this is quite excessive. It certainly does not help that the interface does not adapt to the horizontal orientation of the phone.
Selection and content manipulation
Next, Chelsea tried to add more virtual objects to her photo set. It wasn’t as simple as she thought.
If you watched this part of the video, you saw for yourself how much chaos there was on the virtual set. The most important fix would be to reuse the cursor to indicate where new objects should appear, keeping the application’s behavior consistent. Additionally, as the motorcycle example shows, automatic object placement does not work properly. Later in the video, the bike is incorrectly oriented vertically and positioned far behind the other objects. Its scale is also incorrect.
Another thing to consider is the lack of a basic tool necessary in any application that manipulates many objects: selection. The user should be able to indicate the object they want to edit by touching the screen. A cursor should appear under the selected object to signal that it is currently selected, and during this time the remaining objects should stay static. Touching the screen again in a place that does not point to any interactive object should be interpreted as deselecting the current object. This simple but useful solution would let us make changes to the scene without tearing our hair out.
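The selection behavior described above is a tiny state machine: a tap on an object selects it, a tap on empty space deselects, and only the selected object responds to gestures. A minimal sketch (hypothetical names, not PhotoStudio AR's code):

```python
class SelectionManager:
    """Tap an object to select it; tap empty space to deselect."""

    def __init__(self):
        self.selected = None

    def on_tap(self, hit_object):
        # hit_object is whatever the touch raycast found, or None when
        # the tap landed on empty space (which deselects).
        self.selected = hit_object

    def responds_to_gestures(self, obj) -> bool:
        # Only the selected object reacts to rotate/scale gestures;
        # everything else stays static.
        return self.selected is not None and obj is self.selected
```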
A nice addition would be to change the cursor under the active object depending on the gesture recognized by the application, so the user can be sure they are using the gestures correctly. In the video, you can see that a cursor matching the current operation sometimes appears under the objects, but when it shows up is a mystery to me. It may be a bug that was corrected in a later version of the application.
More errors can be seen when the filmmaker tries to use one of the virtual locations.
Adding this kind of fully virtual location to an application meant to extend the real world is a highly unusual practice, and the screenshot above shows why this type of functionality is usually avoided. The AR view should simply be hidden, because it cannot be reliably blended with the virtual environment. Then again, maybe it’s another bug rather than intended behavior.
I was amused by the part of the video in which Chelsea discovers that there are more models on the virtual beach.
It is possible that this was deliberate, so that the user would have a virtual photo set ready with one click. However, the automatic model placement failed again, and there was no explanation of how this functionality works. It is also possible that, due to the interface and the automatic model placement, Chelsea unintentionally added all these models by accidentally pressing buttons. When you are focused on the camera view, such things are easy to overlook, which is why the on-screen interface should be minimal.
I think this application is wasted potential. It was in the hands of an ideal target customer who was delighted by its idea, yet the irritation caused by how it works, and its disregard for simple rules, meant the product was badly received. The photographer explicitly stated that she felt she had no control over the virtual content. In my opinion, there is too much functionality in the application; perhaps there was not enough time to refine the basics.
The review is from 2018, so it is possible that the application works much better today. I admit I didn’t check; I’m a bit scared of it, and a bit sorry for the $10 I would have to pay. I think Chelsea would also be afraid to give this app a second chance.
But seriously, let’s learn from these mistakes and not repeat them. If you are interested in creating AR applications, I encourage you to visit the following two pages, where the creators of ARCore and ARKit have posted guidelines that will help you create immersive, engaging AR experiences.