Dr Dave Ranyard, CEO, Dream Reality Interactive
Well, the short (and good) answer is no. There is so much more to come. For me, we are just at the beginning of the revolution. And the key missing ingredient? Artificial Intelligence (AI). Coupling immersive interaction and visual output with AI is the revolution we are all waiting for.

But before we jump into that, a bit of history. My father (Dr John Ranyard) was a computer programmer in the 1960s. He worked on a computer with 1K of memory, housed in an old church just next to the University of Leeds. He wrote code using punched cards: he would take his program in on a Tuesday and get the results on a Thursday. This was one-dimensional programming. A linear stream of commands was entered, and a stream of results was output. Fixing a problem or bug was a weekly cycle (and by the way, back then, a bug was an insect that might have eaten or dislodged a cable!).
Cut to the 90s, and by now I was a computer programmer. I sat at a PC screen and input code and data with a mouse and keyboard; the results were output to a 2D screen. Two-dimensional computing. This input-output model for interacting with computers has pretty much stayed with us for 30 years. It can be seen in different guises: a website, for example, is a 2D input form, and the compute power behind it is 2D number crunching, maybe via a database. The internet has connected everything to everything else, a different revolution, but the input/2D-compute/output cycle has largely stayed the same, even with the introduction of smartphones (another revolution).
But let's be clear: touch input is still 2D.
Back to the present, or rather the present and near future. Immersive interfaces give us the opportunity to interact with computers in full 3D. I can use very natural human interaction: gaze, gesture, one-to-one 3D pointing, picking and other naturalistic methods. I ask you this: have you used an Oculus Rift with Touch controllers? With this technology, we can place you in a virtual space and let you pick up objects, press buttons, fire guns, wave at other characters or "live" people. If there are multiple people in that space, we can mimic emotions, gestures, facial expressions and more. Already, we have a pretty good approximation of being a human being in a virtual space, and this will only get better.
Plus, I can use voice as input and output. Or I can indicate my interest with my gaze, or with a gesture of my hands and arms. In fact, immersive technology allows us to be much more natural, more human, in the way we input our intentions to computers and in how we interpret the results. Think about how awkward a phone or Skype call is: we cannot read all those other cues we get from meeting face to face. Immersive technology gives us the opportunity to tune into these kinds of communication cues with each other over the internet, and with virtual characters powered by AI.
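To make "picking" a little more concrete, here is a minimal sketch of how a gaze-based pick can work: cast a ray from the headset along the user's line of sight and test it against the bounding spheres of the objects in the scene. This is purely illustrative (the names and the scene are made up, and a real engine would do the ray test for you), but the core idea really is this small.

```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float
    def sub(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def dot(self, o): return self.x * o.x + self.y * o.y + self.z * o.z

@dataclass
class Target:
    name: str
    centre: Vec3
    radius: float

def pick(origin, direction, targets):
    """Return the nearest target whose bounding sphere the gaze ray hits."""
    best, best_t = None, math.inf
    for t in targets:
        oc = origin.sub(t.centre)      # vector from sphere centre to ray origin
        b = 2.0 * oc.dot(direction)    # direction is assumed to be unit length
        c = oc.dot(oc) - t.radius ** 2
        disc = b * b - 4.0 * c         # discriminant of the ray-sphere quadratic
        if disc < 0:
            continue                   # the ray misses this sphere entirely
        hit_t = (-b - math.sqrt(disc)) / 2.0
        if 0.0 < hit_t < best_t:       # keep the closest hit in front of the user
            best, best_t = t, hit_t
    return best

# A user at roughly eye height, looking straight ahead down the negative z-axis.
gaze_origin = Vec3(0.0, 1.6, 0.0)
gaze_direction = Vec3(0.0, 0.0, -1.0)  # must be normalised in a real system
scene = [Target("button", Vec3(0.0, 1.6, -2.0), 0.2),
         Target("lamp", Vec3(1.0, 1.5, -3.0), 0.3)]
picked = pick(gaze_origin, gaze_direction, scene)
print(picked.name if picked else "nothing")  # -> "button"
```

The same test works whether the ray comes from the user's gaze, a tracked controller, or a pointing finger; that is what makes these input methods interchangeable building blocks.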
Immersive technology, coupled with AI, allows today's computers to respond in a human way. We have the ability to create digital characters who respond to our gestures, voice, eye contact, our very human nature, in a very human and natural way. By coupling immersive input, visuals and audio to AI, we can create digital humans to interact with, in a far more human way than ever before. Imagine having your very own teacher, with all the patience of a saint. I have noticed one of my children prefers Alexa to help him with his spellings because he can ask how to spell something time and time again, without any impatient huffing on the fourth or fifth request (I feel slightly guilty confessing this). The next step is for Alexa to have a human form, understand when he is pointing at something, and be able to spell that: a one-on-one teacher designed to work at exactly my son's speed and capability, offering a bespoke curriculum at his preferred pace to maximise his learning. Or a narrative in which I am a cast member and the whole performance is being put on for my benefit alone, each character telling their part of the story with me in mind and responding to my actions and questions. This is three-dimensional computing: based all around us, understanding every human gesture we make and reflecting back to us in a way we have been learning from birth - by being human. And for me, this is the next revolution.
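To ground the pointing-and-spelling example, here is an equally toy sketch of that interaction loop (again illustrative: the function names are invented, and a real system would put speech recognition and a proper AI behind the replies). A picking routine like the one above would tell the character what my son is pointing at; the character then spells it, as many times as he asks.

```python
def spell_aloud(word: str) -> str:
    """Spell a word one letter at a time, the way a patient tutor would say it."""
    return ", ".join(word.upper())

def tutor_respond(pointed_at_name, request: str) -> str:
    """Answer a spoken request about whatever object the learner is pointing at."""
    if pointed_at_name is None:
        return "I can't quite see what you're pointing at. Try again?"
    if "spell" in request.lower():
        return f"{pointed_at_name} is spelt {spell_aloud(pointed_at_name)}."
    return f"That's the {pointed_at_name}."

# The picking routine above would supply the pointed-at object's name; here we
# hard-code it. Crucially, the answer is identical on the fifth ask - no huffing.
for _ in range(3):
    print(tutor_respond("lamp", "How do you spell that?"))
# -> "lamp is spelt L, A, M, P." (three times over)
```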
Dream Reality Interactive have recently embarked on a hugely exciting new project, backed by Innovate UK and in partnership with Maze Theory and Goldsmiths University. Our aim is to develop characters, all based in the compelling world of Peaky Blinders, with AI-driven non-verbal communication. For our consortium, this is one step towards the next revolution.