
VR Blink Detection

In December 2015 I was invited to Granada Gaming, a video-games festival held in my home town, to talk about VR and my interaction experiments. Very exciting times!

I had to give two talks: the first one was aimed at professionals (coders, artists, journalists), and in it I explained some of the decisions I took while creating Apnea (my always-in-progress videogame). The second talk was for the general public, and for this one I wanted to talk about something that seems to concern a lot of people: VR's limitations and why FPS games won't work very well at the beginning.

I won't cover the whole talk here, as many of the interaction experiments showcased can already be found in the "VR Wireless" post and on my GitHub page, but I created something I think is a cool hack to improve one of the main trends in VR movement: the blink transition.

Blink transition


Moving the FPS way in VR with a gamepad causes nausea in roughly 45% of players (according to nDreams' CEO), so companies are coming up with a lot of creative solutions to work around this problem.
One of the main solutions is called the blink transition, and it has been popularised by great experiences such as Epic's Bullet Train and the GearVR game Land's End.

With this solution you basically look at the point where you want to move and press a button that teleports you there. This transition sometimes happens by closing and opening virtual eyes in front of the user, or by simply lerping them there really fast. A minimal sketch of the basic mechanic is shown below.
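Just to illustrate the mechanic, here is a minimal Unity sketch of that gaze-and-button teleport (class and button names are mine, not from any of the games mentioned): raycast where the head is looking and, on a button press, move the player rig to the hit point.

    using UnityEngine;

    // Minimal sketch of a gaze + button teleport ("blink transition").
    public class BlinkTeleport : MonoBehaviour
    {
        public Transform playerRig;   // the object that actually moves
        public Camera head;           // the VR camera giving the gaze direction

        void Update()
        {
            if (!Input.GetButtonDown("Fire1")) return;

            RaycastHit hit;
            if (Physics.Raycast(head.transform.position, head.transform.forward, out hit, 50f))
            {
                // Instant jump (height handling omitted); real implementations fade
                // out/in with "virtual eyelids" or lerp there very fast instead.
                playerRig.position = hit.point;
            }
        }
    }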

While it's true that this technique usually does not cause any nausea, it is a big presence breaker for me: looking at a point and pressing a button to suddenly be there is not a very natural way of moving... how could I improve it?

Blink detection


What if I could detect whether the user is truly blinking? Instead of closing some fake eyes in front of the player and having to press a button, it would feel like a superpower; think of Nightcrawler from the X-Men. This would suddenly make the movement system way more natural and amazing, improving the presence factor a lot while being more enjoyable and comfortable.

Some experimental HMDs are starting to support eye tracking, the most notable one being the FOVE HMD: if you can track the user's eye it should be trivial to detect whether it is closed. But all I had in my hands was a GearVR for the Galaxy Note 4, so some hacking was needed.

I realised that the GearVR has some covers on the front to prevent scratching the phone when it is attached. If you remove one of these plastic covers... voilà! There is a screw!

Facing the front of the headset, the top-right screw just happens to be exactly in line with the Note 4's front camera! I inspected the Galaxy S6 GearVR and it seems to be the same case. "What a lucky coincidence!" I thought, "if only I could use this camera to track the user's eyes..."

Then a new idea came to my mind: I don't need to track the eye, I just want to know if it is open. When you look at someone's eyes they reflect a lot of light, but when you close them your skin is nowhere near as reflective.

If I create a very bright scene my eye will reflect a lot of light, and maybe some of it will reach the front-facing camera. But that was not the case: the camera (even though I removed the screw right in front of it) was still too far away and too angled to read such a subtle amount of light. A visit to the store solved this: 10 cm (£2) of optical fibre. If I put one end in the screw hole facing directly at the camera and the other end facing the eye, I can redirect the light from the screen -> to the eye -> to the cable -> to the camera!
The camera end of the cable
The eye end




All that remained was to write an ultra-simple script that reads the amount of light received: just add up the values of all the pixels and check whether the sum is higher than a threshold. Thanks to Unity and Android I can control the resolution, and I discovered that a 20x20 texture was good enough, so the calculation was lightning fast... even detecting the fastest blink. For obvious reasons I required the eye to be closed for around 500 ms, otherwise the user would surely be travelling more than desired. A minimal sketch of the idea is shown below.
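For reference, here is a minimal Unity sketch of that script, assuming the front camera is exposed as a WebCamTexture; the threshold values are illustrative, not the originals:

    using UnityEngine;

    // Minimal sketch of the brightness-based blink detector (thresholds are illustrative).
    public class BlinkDetector : MonoBehaviour
    {
        WebCamTexture cam;
        const float brightnessThreshold = 0.15f; // below this, the eye is considered closed
        const float requiredClosedTime = 0.5f;   // ~500 ms so accidental blinks don't teleport you
        float closedSince = -1f;

        public System.Action OnBlink;            // fired once per detected blink

        void Start()
        {
            cam = new WebCamTexture(20, 20);     // a tiny texture is enough and keeps the sum cheap
            cam.Play();
        }

        void Update()
        {
            if (!cam.didUpdateThisFrame) return;

            // Average brightness of the whole frame.
            Color32[] pixels = cam.GetPixels32();
            float total = 0f;
            foreach (var p in pixels)
                total += (p.r + p.g + p.b) / (3f * 255f);
            float average = total / pixels.Length;

            if (average < brightnessThreshold)
            {
                if (closedSince < 0f)
                    closedSince = Time.time;
                else if (closedSince < float.MaxValue && Time.time - closedSince >= requiredClosedTime)
                {
                    if (OnBlink != null) OnBlink();
                    closedSince = float.MaxValue; // do not fire again until the eye reopens
                }
            }
            else
            {
                closedSince = -1f;
            }
        }
    }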


Here is a video. The black square is the camera input; you can see the amount of light received (multiplied by a very, very big factor) and how it becomes pretty much black when I close my eyes:



My last problem was with Android 4.4.4 and the camera sensitivity. It seems that some enhancements were added in Android 5, but with my version I could not control the shutter, so sometimes it was not sensitive enough to read the light. The solution is quite lame: just pull the phone slightly away from the HMD (while keeping the USB connected) so a lot of light reaches the camera; this adjusts the shutter for you and everything works fine!

This was so simple and cheap to do (and so useful!) that I would love future HMDs to ship with at least some sort of blink detection. VR movement sure is an interesting problem and, when deciding between presence and not making people sick, every little bit helps.


Hack 'n Slash


October 2014 came fast and I was ready for another HackManchester after having a blast the previous year. But this time, having to work 60+ hours a week made me decide to do something much simpler so I could get some sleep.

At this point I was starting to experiment with an idea for what would later become Apnea, my first ever commercial/experimental game (still in the making... but more on this in another post). One of the key features of Apnea was the detection of the user's steps using the HMD's accelerometer, and another was the detection of the user's breath with the microphone. Soon I realized I had a problem: every time the user walked, very strange signals appeared in the breath detector. Quite odd! I fixed those problems much later, but at that time I thought: what if I make a small interactive game out of this odd behaviour?

The idea was named Hack 'n Slash (thanks Tom!) and it is basically a fencing game using sticks and smart-watches!
In order to play you will need:

  • 2 Android smart-watches (I used the LG ones)
  • 2 Android smartphones (I used a Moto G and a Nexus 7)
  • 2 players
  • 2 swords (anything from wooden stick to foam sword)

The rules themselves are pretty simple: each player has 5 lives and every time they are hit on the body by the other player they lose one. When a successful hit is landed the players have 2 seconds to go back to the initial position before starting to fence again.
But how does it work?

Hit landing


Detecting that a sword is hitting an object is not a trivial task. A first, naive approach would use the smart-watch accelerometer to detect that the user's arm has suddenly stopped or changed course, but this would generate a lot of false positives, as the swords move really fast in the air and cause a lot of noise in the acceleration signal. Here is where the Apnea problem comes into play.

When the sword actually hits something it micro-vibrates, and this vibration is transmitted to the user's hand and wrist. If only we had a reliable way to read vibrations... but we do! The microphone in the smart-watch is an air-vibration detector that we can "easily" convert into a "wrist-microvibration detector".

  1. Easy - Tape over the microphone hole and maybe put some blu-tack on top of the tape so no air can get in at all.
  2. Easier - Wear the smart-watch, preferably on top of the big bone of your wrist.
  3. Not so easy - Now we need to read the microphone data and identify the microvibrations of the wrist. If the sword hits something the bones will vibrate and the isolated microphone will pick up some noise. Since the bones are solid we are interested only in low-frequency vibrations: applying the Fourier transform to the noise signal and reading just the lower bins (with an empirical threshold), we can determine that a hit has landed! This can be further improved by mixing this data with the accelerometer peaks. A minimal sketch of the idea is shown after this list.
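To make the idea concrete, here is a minimal C# sketch (not the original Android Wear code) that computes the energy of the lowest DFT bins of a short microphone buffer and compares it against an empirical threshold:

    using System;

    // Minimal sketch: decide whether a short audio buffer from the taped-over
    // microphone contains the low-frequency "bone vibration" of a sword hit.
    public static class HitDetector
    {
        // samples: mono audio normalised to [-1, 1]; sampleRate in Hz (e.g. 8000).
        // cutoffHz and threshold are illustrative, empirical values.
        public static bool IsHit(float[] samples, int sampleRate,
                                 float cutoffHz = 200f, double threshold = 0.5)
        {
            int n = samples.Length;
            int maxBin = Math.Max(1, (int)(cutoffHz * n / sampleRate));

            double lowEnergy = 0;
            // Naive DFT of only the first few bins; fine for short buffers.
            for (int k = 1; k <= maxBin; k++)
            {
                double re = 0, im = 0;
                for (int i = 0; i < n; i++)
                {
                    double angle = 2 * Math.PI * k * i / n;
                    re += samples[i] * Math.Cos(angle);
                    im -= samples[i] * Math.Sin(angle);
                }
                lowEnergy += (re * re + im * im) / n;
            }
            return lowEnergy > threshold; // a hit landed if the low bins carry enough energy
        }
    }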

Hit synchronization


This was pretty much my second try with the Android Wear APIs, and I do hope they have improved since. In order to detect if the sword was hitting the meat I needed one smartphone paired to each watch; I hope that by now you can connect several watches to one phone or, even better, pair watch to watch.

When player 2 hits something successfully, their watch informs its paired phone using the Google APIs, and this phone tells player 1's phone (the server) using sockets. Player 1's watch obviously communicates directly with the server phone.

The server then calculates whether both players registered a hit at roughly the same time (within about 100 ms). This indicates either a draw (both players hit each other at the same time) or a clash (one sword hit the other). If neither of these situations happened, then the player who did not send a hit in time loses a life and the system closes the communications for 2 seconds so the players can go back to the initial position.

Socket communication was fast enough, adding just a ~10 ms delay... but the Google system for watch-to-phone communication was incredibly slow (around 300 ms). How could I detect simultaneous hits?
The solution in the end was quite simple: after successfully pairing watches and phones, both players clash their swords to start the match. This, apart from looking cool as some sort of fighting ritual, allows the program to measure the starting time difference between the two players. Then, when a hit is sent, it carries a time-stamp that takes into account the time difference measured at the beginning. The result was great: a sword clash looks like two simultaneous hits just 5 ms or so apart! A minimal sketch of this trick is shown below.
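Here is a minimal sketch of that synchronization trick (class and method names are mine): measure the clock offset during the calibration clash, then compare adjusted timestamps within the ~100 ms window mentioned above.

    using System;

    // Minimal sketch of the hit synchronization logic.
    public class MatchClock
    {
        long offsetMs; // player 2's clock minus player 1's clock

        // Called once with the timestamps reported for the initial sword clash.
        public void Calibrate(long clashPlayer1Ms, long clashPlayer2Ms)
        {
            offsetMs = clashPlayer2Ms - clashPlayer1Ms;
        }

        // True if the two hits happened "at the same time" (draw or clash).
        public bool Simultaneous(long hitPlayer1Ms, long hitPlayer2Ms, long windowMs = 100)
        {
            long adjustedPlayer2 = hitPlayer2Ms - offsetMs;
            return Math.Abs(adjustedPlayer2 - hitPlayer1Ms) <= windowMs;
        }
    }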

The code


The code for Hack 'n Slash can be found on my GitHub. Please keep in mind this was done in 2014, when Android Wear was quite young, so I am not sure if it still holds up.



The Wireless VR Experience

After leaving my VR Gun project aside for a while, I decided to go to HackManchester 2013 and give it the push it deserved by creating not just the gun, but a full VR experience. In 25 hours I managed to finish the weapon and modify an existing game named Angry Bots to be playable with all the freedom of a wireless system!

I won the "Best Company Project" award from the jury, and was also the runner-up (2nd) in the "Best of All" competition. A true honour that motivated me to polish and showcase the project at Manchester's Android Meetup months later, with huge success :-)

Ok, so what is this exactly? Basically it is a set of applications that detect the user's movements to control and display an FPS version of the mentioned game. The user experiences a 3D environment that allows them not only to look around using stereoscopic vision, but also to walk, jump, crouch, aim... a full VR experience; and most importantly: in real time, without any cables and ultra-light, perfect for feeling deeply immersed.

All the code can be found on my GitHub. I will explain here the key parts of the project, which are summarised in the presentation I used for the Android Meetup and can be found here.

The Gun:

The gun used is obviously my VR Gun. I coded the Arduino board so I could actually track the on/off positions (only on first call) of the trigger and the ammo clip, modifying the original Makey Makey code.

Through an OTG cable, the board is connected to a Galaxy S3 attached to the gun itself, running a very particular app. The app, inside the DataStreamer folder, listens to the Arduino output and also tracks the pose. The phone then has all the important information related to the weapon and can send it to the server (the game). But not only that! Because the phone is in contact with the gun and knows when the trigger is pressed, I also implemented some haptic feedback: when the user fires, the gun vibrates with a nice machine-gun rhythm.



Choosing the right phone is not as easy as it seems:

  • It needs not only OTG support, but also the ability to supply 5V through it.
  • The pose detection relies strongly on the gyroscope sensor and, nowadays, it is quite difficult to find information about how good a phone's gyroscope is. I tried my best to correct any drifting using a version of this code, which brings the accelerometer and compass into the mix in order to create a rock-solid pose reading, but it can still be problematic on medium/low-end phones. For that reason I included a huge Sync button in the middle of the screen, so it is impossible to miss while playing, and it realigns the gun, hip and head poses. A rough sketch of this kind of sensor fusion is shown after this list.
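As a rough illustration only (this is not the actual DataStreamer code, and device axis conventions are glossed over), a complementary filter of this kind integrates the gyroscope for responsiveness and slowly pulls the estimate towards the gravity direction measured by the accelerometer to cancel drift:

    using UnityEngine;

    // Minimal sketch of a gyroscope + accelerometer complementary filter.
    public class FusedPose : MonoBehaviour
    {
        public Quaternion Pose = Quaternion.identity; // device-to-world rotation estimate
        const float correction = 0.02f;               // how strongly the accelerometer corrects drift

        void Start()
        {
            Input.gyro.enabled = true;
        }

        void Update()
        {
            // 1. Integrate the gyroscope (rad/s, device frame) over this frame.
            Vector3 w = Input.gyro.rotationRateUnbiased;
            Quaternion delta = Quaternion.Euler(w * Mathf.Rad2Deg * Time.deltaTime);
            Pose = Pose * delta;

            // 2. Nudge the estimate so its "down" matches the measured gravity direction.
            Vector3 measuredDown = Input.acceleration.normalized;           // approx. gravity in device space
            Vector3 estimatedDown = Quaternion.Inverse(Pose) * Vector3.down;
            Quaternion fix = Quaternion.FromToRotation(estimatedDown, measuredDown);
            Pose = Pose * Quaternion.Slerp(Quaternion.identity, Quaternion.Inverse(fix), correction);
        }

        // The big Sync button simply resets the estimate so head, hip and gun agree again.
        public void Sync() { Pose = Quaternion.identity; }
    }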


The Galaxy S3 works wonderfully, but there are still some scenarios where the user has to hit the button every 5 minutes or so... until I code a proper solution (I have already found one, but it has not been implemented yet; more in the last paragraph). Also, the 5V requested over OTG makes the battery drain quite quickly (1-2 h).


The Movement:


For the movement I used a different phone running a pedometer, also bundled in the DataStreamer app, which I created for my old Augmented Reality system. The important thing about this pedometer is that it listens not only to the strength of the steps but also to their rhythm, so it is very resistant to noise; it could even run directly on the helmet! Instead, I decided the user would put it in their back pocket: this way it can track the hip orientation and even the inclination of the backside, so the user can walk in the direction their hips are facing rather than their head, and even crouch in real time or lie down on the ground. A minimal sketch of this rhythm-aware step detection is shown below.
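As a rough illustration (this is not the original pedometer code, and the numbers are made up), the rhythm-aware idea is that an acceleration peak only counts as a step if it is strong enough and its spacing from the previous peak matches a plausible walking cadence:

    // Minimal sketch of rhythm-aware step detection.
    public class StepDetector
    {
        const float strengthThreshold = 1.3f; // acceleration magnitude in g; empirical
        const double minInterval = 0.3;       // seconds between steps (fast walk)
        const double maxInterval = 1.2;       // seconds between steps (slow walk)

        double lastPeakTime = double.NegativeInfinity;

        // Feed the acceleration magnitude (in g) and the sample time (in seconds).
        // Returns true when a step is registered.
        public bool Feed(float accelMagnitude, double time)
        {
            if (accelMagnitude < strengthThreshold) return false; // not strong enough

            double interval = time - lastPeakTime;
            if (interval < minInterval) return false;             // same peak or noise burst

            bool rhythmic = interval <= maxInterval;               // fits a walking cadence
            lastPeakTime = time;
            return rhythmic;                                        // isolated spikes do not count as steps
        }
    }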

Because, as mentioned in the previous point, not all phones have an RT-compliant gyroscope, I decided to add a toggle button that disables the hip/backside pose detection and uses the head tracking instead, in case the phone is not powerful enough to keep the pose updated without breaking the immersion.


The Helmet:


This is the most important part. I created a helmet out of foam during the hackathon (later substituted by a more professional-looking black helmet) that holds a 7" tablet (Nexus 7) and 2 lenses in front of the user's face. The code running here is in the VRUnity folder and contains a Unity3D project. It is the Angry Bots demo game modified in two ways:

  1. The player has been removed and the camera replaced, after importing the Oculus Rift SDK, with a stereoscopic pair of cameras that render the view with the right oval shape for the lenses and also track the head's pose very fast. Since the OR SDK is PC-only, I had to modify the code a bit so it would not silently crash on my device. Specifically, I commented out all the code related to DeviceControllerManager and the DeviceImposter.
  2. I included a communication system to allow the DataStreamer app to send data to the game. More on this in the next point.


The biggest task here was not only finding out what was not working with the OR SDK (it was not crashing; it was actually working, but my communication system was not... and the SDK was the culprit), but also creating the helmet. There are a few small details to keep in mind:


  • It has to be closed and dark so that outside light does not distract the eyes.
  • The eyes-lenses-screen distance is very important and varies depending on the user. I ended up creating a couple of rails in my last helmet so it was adjustable.
  • Breathing is an issue. There has to be an opening for the nose, or the screen/lenses will steam up in a few seconds.
  • It has to be light, but still avoid any kind of wobbling.

The first version was not the best, but after many tries, super-glue, cutting and broken elastic bands I built a black helmet following all those guidelines.


The Communication:


The way everything works together is thanks to some UDP magic. The DataStreamer apps bundle the information in datagram packets and send them to the server (the game). Once the game receives a packet, it has to parse it and redirect it to the relevant GameObjects, which apply the information.

The key to achieving RT here was to use a port for each possible message type (fire mode, hip pose, steps, gun pose, etc.): there is a file defining a set of port offsets and each sender applies its offset when sending the datagram. On the server side there is one thread running per port; each of these threads looks for exactly one type of message and processes it as soon as it is received. A minimal sketch of this scheme is shown below.
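Here is a minimal C# sketch of that scheme (port numbers and type names are illustrative, not the ones used in the project): each message type is assigned basePort + offset, and the server spawns one blocking listener thread per port.

    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    // Minimal sketch of the "one UDP port per message type" scheme.
    public static class MessagePorts
    {
        const int BasePort = 9000;            // illustrative base port
        public const int GunPose  = 0;        // offsets per message type
        public const int HipPose  = 1;
        public const int Steps    = 2;
        public const int FireMode = 3;

        public static void Listen(int offset, System.Action<byte[]> handle)
        {
            var thread = new Thread(() =>
            {
                using (var udp = new UdpClient(BasePort + offset))
                {
                    var any = new IPEndPoint(IPAddress.Any, 0);
                    while (true)
                    {
                        byte[] data = udp.Receive(ref any); // blocks until a datagram arrives
                        handle(data);                       // exactly one message type per port
                    }
                }
            });
            thread.IsBackground = true;
            thread.Start();
        }
    }

    // Usage on the server (game) side, e.g.:
    //   MessagePorts.Listen(MessagePorts.Steps, data => ApplyStep(data));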

What's next:

There are a few bits I am still not happy with and I will try to solve at some point.

  • I would love to take advantage of the current 3D printers and get a more professional helmet done. I am already talking to a few people and this might be happening soon! In that case I will put the model here.
  • Gun drifting. As I mentioned, after a few minutes the gun pose might have drifted a bit. If the tablet in the helmet had a camera I could put a few LEDs on the gun's nose, flickering with different patterns, and then track them directly from the game. This would even allow moving the gun in full 3D space (when in view). I still want the user to be able to fire backwards; in that case the normal gyroscope reading can be used and, when the gun comes back into the front view, the drifting (if any) can be corrected again.
  • Jump! I already did a few experiments, but it would be interesting to detect jumps using the pedometer code. So far the results make me feel very dizzy after a couple of tries.
  • Use a more professional gun; maybe modify an electric airsoft gun so the weight and controls feel more real.

Applications for the AR system



One of the advantages of the AR system developed is the ability to couple it with the logic of a video game. While I was coding the system, my friend Carlos Torija was designing a video game: he created the artificial intelligence and the logic, and then we both added simple graphics with OpenGL and gave it some AR. In this game you have to evade/attack an evil drone that follows you and tries to kill you. The game was made to be played in an open space and it has virtual walls! The next step includes map recognition.




I also started another AR game using my system. I planned to release it for bada 2.0, but Samsung keeps delaying it, so the game is unfinished. The game is an augmented stunt-kite simulator; at the moment it has really simple physics and a fixed wind, but I plan to add a wind system using weather forecasts and more advanced physics in order to perform realistic tricks.


Note: The pink artifacts appear when taking a screenshot and are not present in the real application.

NDS experiments

I know the Nintendo DS is old hardware now, but back in the day it was awesome! One day I discovered PAlib for the NDS and decided to investigate. I came up with some weird ideas, from a back-scratching game to a portable version of the "Settlers of Catan" board game (one now exists for the NDS, but it is not my unfinished experimental version). I also started to program a time-based multiplayer gymkhana game for several teams, designed to be played in a specific forest, with clues that had to be solved using the NDS.


These projects taught me how hard it is to create a game without a real graphical artist on the team, but I also found that seeing your results on a video console is an amazing feeling.

Note: the scratching game is about... that. With difficulty levels ranging from stinging to herpes, and a scoring system!

The home-made VR system.

During my holidays I was growing quite fat due to inactivity and spending too much time playing Minecraft. At home I have a dance mat for DDR and those rhythm games, but I don't really like them, so I decided to create something fun, healthy and a bit geeky: a home-made virtual reality system!

I found GlovePIE, an input emulator where you can create scripts to remap almost any controller, from dance mats to Wiimotes. And yes, I have both of them. I created my first script for Minecraft where you can walk using the dance mat (1 step in the real world == 1 step in Minecraft) in a quite realistic way (it is not about pressing a single button, but about walking naturally) and also jump (the character jumps when you lift both feet off the ground, so it is almost 1:1). For digging and gathering wood you have to shake the Wiimote as if you were using a pickaxe, and for placing blocks you have to move the Nunchuk.


Then I started to play Skyrim and gained weight again; that's why I remade the script! Now it is possible to detect when the user is sprinting, and the script also implements different controls for each weapon: when you are using a sword you need to swing the Wiimote horizontally to perform a light attack and vertically for a strong attack; you can take cover behind your shield by raising the Nunchuk, and shake it from there to push things with it; you have to raise your hands to control spells and, most importantly, you can use your voice. GlovePIE can easily detect voice commands, and in Skyrim this means that not only can you give simple orders like "save the game" or "equip the bow", but you can also shout! So if you want the character to shout the "FUS ROH DAH" spell, all you need to do is shout it!
I forgot to show how to hit things with the bow by shaking the Nunchuk :-(; also, the voice commands are in Spanish.

Who needs Kinect?


Minekhaan

Minecraft is an impressive and extremely fun game, and I really love it. So when I decided to start learning OpenGL this game was my model: a simple, but huge, game.



MineKhaan is just an experiment programmed from scratch using OpenGL and C to create custom worlds using big cubes. You can walk and jump, create or erase cubes with different textures and, moreover, you can select figures (a tree, a house or some grass) and copy-paste/store-save them! So you can construct your world in a faster and friendlier way!