Seven Trends in AR
By Marco Marchesi, CTO, Happy Finish Global
Context awareness. We are seeing more and more AR apps and tools that are context-aware. Deep learning techniques are the most common way to gain insight into what the captured camera frame contains, so object detection, segmentation and image-to-image translation (personally my favorite: recognize reality and transform it) are the most widely used models. While AR keeps a memory of what is around us by tracking the visual features that appear on camera, deep learning techniques infer what is in the scene. From a simple description of the objects in front of us to a more sophisticated interaction between a virtual avatar and your laptop display, the opportunities are infinite.
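To make the idea concrete, here is a minimal sketch of the "scene description" step: a detector returns labeled objects with confidences, and the AR layer keeps only the confident ones. The detector here is a stub with hypothetical output; a real app would run an on-device or cloud model in its place.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def detect_objects(frame):
    """Stand-in for a real detector (e.g. a mobile-optimized CNN).
    Returns hypothetical detections for an office scene."""
    return [
        Detection("laptop", 0.92),
        Detection("coffee cup", 0.81),
        Detection("chair", 0.40),
    ]

def describe_scene(frame, threshold=0.5):
    """Turn raw detections into a scene description an AR layer can use."""
    kept = [d.label for d in detect_objects(frame) if d.confidence >= threshold]
    return "Scene contains: " + ", ".join(kept) if kept else "No confident detections"

# In practice `frame` would be a camera image.
print(describe_scene(frame=None))  # → Scene contains: laptop, coffee cup
```

The same structure extends naturally to segmentation masks or image-to-image models: the detector changes, the filtering-and-describing layer stays.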
Location awareness. When Niantic released Pokémon Go, it was clear that location-based AR apps would become “a thing” at some point. But placing gyms and items to collect was just the first step. In the meantime, other companies were working under the radar to map the real world and build a virtual one on top of it. Recently, a London-based startup called Scape released an SDK that allows users to place AR assets that stay persistently anchored to buildings in the city areas it has scanned, across 100 cities around the world. Snapchat introduced “Landmarkers”, so users can see famous monuments, from the Eiffel Tower to Buckingham Palace, augmented in the most creative ways.
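The core of a persistent, location-anchored asset can be sketched in a few lines: store the asset with GPS coordinates and render it only when the user is nearby. This is an illustrative simplification; production SDKs refine the coarse GPS fix with visual localization against a 3D map of the area.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    R = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# A hypothetical anchor pinned near Buckingham Palace (approximate coordinates).
anchor = {"asset": "crown_model", "lat": 51.5014, "lon": -0.1419}

def should_render(anchor, user_lat, user_lon, radius_m=100):
    """Render the anchored asset only when the user is within range."""
    return haversine_m(anchor["lat"], anchor["lon"], user_lat, user_lon) <= radius_m
```

A user standing at the palace gates would see the asset; a user a kilometre away in Trafalgar Square would not, yet the anchor itself persists between sessions and across users.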
Remote rendering. The idea of relying on remote rendering capabilities to give superpowers to mobile AR content is not new, but over the years it has faced the constraints of poor network bandwidth, limited hardware and significant latency.
With the advent of 5G and edge computing, the concept is becoming reality, and the first demos of real-time rendering performed on a server machine and visualized remotely on a mobile device have been published. Microsoft, for example, has introduced Remote Rendering as one of its Azure ecosystem options, and I expect many more cloud rendering services to arrive soon, along with cloud software and platforms (for example, Stadia).
Faster networks. Strictly related to the previous point, edge computing will make it possible to realize challenging ideas where speed and reliability are non-negotiable. Imagine how difficult it would be to deliver remote rendering to 10,000 users in a stadium during a sports event or a concert. Besides this, 5G aims to be the natural partner of edge computing in delivering fast, rich and reliable experiences. Its adoption rate will be key in determining how successful server-based solutions will be compared to existing locally managed technologies. For example, body tracking can currently be performed on a mobile phone by running a lightweight AI model locally. But even as mobile computational power increases, and companies like Apple and Huawei introduce dedicated AI chips in their phones, deep learning architectures are becoming deeper and more computationally expensive, demanding more battery, memory and processing resources. 5G will make it possible to run such architectures remotely in real time without any noticeable latency.
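Why latency is the deciding factor is simple arithmetic: at 60 fps a frame must be delivered every ~16.7 ms, and the network round trip plus encode, decode and server render time must all fit inside that budget. The sketch below uses illustrative (not measured) timings to contrast a 4G-class round trip with a 5G/edge one.

```python
def remote_render_feasible(fps, network_rtt_ms, encode_ms, decode_ms, render_ms):
    """Check whether a remote-rendering round trip fits in one frame interval.
    All timings are illustrative assumptions, not benchmarks."""
    frame_budget_ms = 1000.0 / fps
    total_ms = network_rtt_ms + encode_ms + decode_ms + render_ms
    return total_ms <= frame_budget_ms, frame_budget_ms, total_ms

# A ~50 ms 4G-class RTT vs a ~10 ms 5G/edge RTT, both targeting 60 fps.
for rtt in (50, 10):
    ok, budget, total = remote_render_feasible(
        fps=60, network_rtt_ms=rtt, encode_ms=2, decode_ms=1, render_ms=3
    )
    print(f"RTT {rtt} ms: {total} ms vs {budget:.1f} ms budget -> "
          f"{'fits' if ok else 'misses'}")
```

With a 50 ms round trip the pipeline misses every frame; at 10 ms it fits, which is why edge servers physically close to the user matter as much as raw bandwidth.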
Virtual try-on. Advances in deep learning have made it possible to achieve more accurate body tracking that runs in real time on mobile devices. The retail industry, particularly sports and fashion, will take advantage of the opportunity for users to try clothes and accessories on their mobile phones or on AR mirrors in stores, making shopping a question of “try before you buy”, with customization and visual effects dominating the experience. On top of that, with virtual try-on apps we can expect a reduction in returns (why order something if I do not like how it fits virtually?) and consequently in the CO2 emissions caused by transportation.
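At its simplest, try-on means fitting a garment overlay to the keypoints a body-tracking model produces. The sketch below assumes hypothetical 2D shoulder keypoints from a pose estimator and computes the scale and anchor point for a flat garment image; a real pipeline would do this in 3D with a full skeleton and cloth simulation.

```python
def fit_garment(garment_width_px, left_shoulder, right_shoulder):
    """Fit a 2D garment overlay to tracked shoulder keypoints.

    Keypoints are (x, y) pixel coordinates from a pose-estimation model.
    Returns the scale factor for the garment image and the pixel
    position where its top-centre should be anchored.
    """
    shoulder_width = abs(right_shoulder[0] - left_shoulder[0])
    scale = shoulder_width / garment_width_px
    anchor_x = (left_shoulder[0] + right_shoulder[0]) / 2
    anchor_y = min(left_shoulder[1], right_shoulder[1])
    return scale, (anchor_x, anchor_y)

# Hypothetical keypoints from one video frame.
scale, anchor = fit_garment(200, left_shoulder=(140, 220), right_shoulder=(340, 224))
print(scale, anchor)  # → 1.0 (240.0, 220)
```

Because the keypoints update every frame, the overlay follows the user as they move, which is what makes the mirror-style experience feel like actually wearing the item.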
Lenses. For a while, AR lenses and filter effects seemed just a playful alternative to the more sophisticated SLAM-based AR frameworks running natively on mobile. Furthermore, content creation suffered from the limitations of the filter platforms in terms of file size and number of vertices. With faster networks, we will see higher-quality assets, and new functionalities will be introduced alongside face tracking, body tracking (of pets too!) and gesture recognition.
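Those content limits are exactly the kind of constraint a creator's build pipeline checks before upload. The sketch below validates an asset against illustrative limits; the actual numbers vary by platform and change over time, so treat both defaults as placeholders.

```python
def validate_lens_asset(file_size_kb, vertex_count,
                        max_size_kb=4096, max_vertices=65000):
    """Check a 3D asset against illustrative platform limits.
    The default limits are assumptions, not any vendor's real numbers."""
    issues = []
    if file_size_kb > max_size_kb:
        issues.append(f"file size {file_size_kb} KB exceeds {max_size_kb} KB limit")
    if vertex_count > max_vertices:
        issues.append(f"{vertex_count} vertices exceed the {max_vertices} limit")
    return issues

# An over-sized but low-poly asset fails on one count only.
print(validate_lens_asset(file_size_kb=5200, vertex_count=40000))
```

As networks speed up and platforms stream assets rather than bundle them, these ceilings should rise, which is what opens the door to the higher-quality lenses described above.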