
Creating a virtual reality nostalgia simulator

Sam Hendrickx

Suppose you want to revisit your childhood neighborhood. Times have changed, though, and so has your district. There goes your nostalgia… But maybe you could revisit your neighborhood in its original state, just as you remember it, using Virtual Reality goggles. You describe what your environment looked like, et voilà, it appears in front of you, waiting for you to relive it.

While looking for a VR thesis for our engineering degree, a subject Jan Hardy and I were both very interested in, we came across Craftworkz. They were happy to offer such a thesis, so the deal was sealed. We sat together a couple of times to discuss ideas and to form the research question that eventually became: “Determine whether or not it is possible to combine certain speech and VR technology to create an application that allows users to create small environments.”

In the cool new office of Craftworkz, we started building a prototype of the application, with a focus on elderly people who want to be taken back to the childhood neighborhood of the good old days, or for whom it might be difficult to walk around. At the end of our thesis, we had a functional proof of concept built in Unity for the HTC Vive, incorporating several APIs.

How it works

The entire pipeline of the prototype

In this prototype, we opted for a modular design, as can be seen in the picture. Thanks to this design, each phase in the application can easily be adapted and extended to create a more realistic environment without having to change the entire application.

The current prototype consists of three main stages:
1) Speech to JSON
2) JSON to 3D objects and commands
3) Exploring and adjusting the built VR world in Unity

Speech to JSON

When a user describes the environment by speaking, this speech first has to be transcribed to written text. This is the responsibility of the Google Speech API. To filter the transcript for relevant keywords, we use MeaningCloud’s Topics Extraction API. This API applies a dictionary we defined on the MeaningCloud website, listing all keywords we expected to be used most often. The keywords obtained from this API are then used to create a JSON string that represents the object or action the user was describing. Currently, the only supported objects are houses and streets; these houses are very primitive objects built from basic blocks with a certain color.
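
As an illustration, a sentence such as “there was a blue house with two floors” could result in a JSON string along these lines (the field names are simplified here for illustration; the exact structure in the prototype may differ):

    {
        "type": "house",
        "floors": 2,
        "color": "blue"
    }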

First phase in more detail

JSON to a 3D object or command

These JSON strings are then converted to WorldObjects, C# objects we defined ourselves. Using WorldObjects makes it easier to adjust the information contained in such an object. After this conversion, the WorldObjects serve as input for object generators, which execute a command or generate and spawn an object matching the one described. Since the application is modular and designed with future adaptations in mind, these JSON strings might come from elsewhere in the future. In that case, they might contain information the object generators do not understand, or be structured differently. To ensure that all information in the JSON string is correct and well structured, extra checks are done during the conversion from JSON to WorldObject, as sketched below. All checks on these strings are driven by simple .txt files in which users can specify which information they do or do not allow.
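
A minimal sketch of what this conversion and validation could look like in C#, using Unity’s JsonUtility; the class and file names here are illustrative, not the exact ones from our code base:

    using System.Collections.Generic;
    using System.IO;
    using UnityEngine;

    [System.Serializable]
    public class HouseJson            // mirrors the incoming JSON fields (illustrative)
    {
        public string type;
        public int floors;
        public string color;
    }

    public class WorldObject          // our own C# representation
    {
        public string Type;
        public int Floors;
        public string Color;
    }

    public static class WorldObjectConverter
    {
        // One allowed object type per line in a plain .txt file the user can edit.
        static readonly HashSet<string> allowedTypes =
            new HashSet<string>(File.ReadAllLines("allowed_types.txt"));

        public static WorldObject FromJson(string json)
        {
            HouseJson parsed;
            try { parsed = JsonUtility.FromJson<HouseJson>(json); }
            catch (System.ArgumentException) { return null; }   // malformed JSON

            // Reject objects the generators would not understand.
            if (parsed == null || !allowedTypes.Contains(parsed.type))
                return null;

            return new WorldObject { Type = parsed.type, Floors = parsed.floors, Color = parsed.color };
        }
    }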

Second phase in more detail

These .txt files also make it possible to insert certain default data into WorldObjects, should this data not be provided by the user. For example, a user can simply say “there was a house.” How can the object generator responsible for building houses know how many floors this house should have? All this default information can be put in these .txt files and is inserted into the WorldObject during the conversion from the JSON string. The house depicted above consists of 5 floors and has a blue pointy roof; all this data comes straight from these .txt files, as it was not specified by the user. It is possible to further define some details of the building, after which it will automatically change to match the new, more specific description.
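
A sketch of how such defaults could be applied during the conversion, reusing the WorldObject type from the previous sketch; the key=value format of the .txt file is an assumption for illustration:

    using System.Collections.Generic;
    using System.IO;

    public static class DefaultValues
    {
        // defaults.txt holds lines such as "floors=5" or "color=blue" (assumed format).
        static readonly Dictionary<string, string> defaults = Load("defaults.txt");

        static Dictionary<string, string> Load(string path)
        {
            var result = new Dictionary<string, string>();
            foreach (string line in File.ReadAllLines(path))
            {
                string[] parts = line.Split('=');
                if (parts.Length == 2)
                    result[parts[0].Trim()] = parts[1].Trim();
            }
            return result;
        }

        // Fill in any property the user did not specify.
        public static void Apply(WorldObject obj)
        {
            if (obj.Floors <= 0 && defaults.ContainsKey("floors"))
                obj.Floors = int.Parse(defaults["floors"]);
            if (string.IsNullOrEmpty(obj.Color) && defaults.ContainsKey("color"))
                obj.Color = defaults["color"];
        }
    }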

Example of a house on a street

Exploring and adjusting the built Virtual Reality world in Unity

After the user has described objects, they appear in the VR world, which is explorable using the HTC Vive. Changing the VR world is possible either with the Vive controllers and the menus attached to them, or through the GUI on the PC. Rotating, translating and deleting are the currently supported actions.
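
These three actions map naturally onto Unity’s Transform component. The following is a minimal sketch of how they could be wired up, not our exact implementation; how the target object is selected (controller ray, GUI click) is left out:

    using UnityEngine;

    public class ObjectManipulator : MonoBehaviour
    {
        public Transform selected;          // the object currently being edited
        public float rotationSpeed = 45f;   // degrees per second
        public float moveSpeed = 1f;        // metres per second

        public void Rotate(float direction)          // direction: -1 or +1
        {
            selected.Rotate(Vector3.up, direction * rotationSpeed * Time.deltaTime);
        }

        public void Translate(Vector3 direction)
        {
            selected.Translate(direction * moveSpeed * Time.deltaTime, Space.World);
        }

        public void Delete()
        {
            Destroy(selected.gameObject);
            selected = null;
        }
    }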

To build and explore the VR world, two different interfaces are provided. The first is a room that shows the built world as a scale model on a table; next to this scale model, several other objects that might be helpful when building the environment are placed in the room. The second interface is the world built by the user, at full scale, which gives the user more of a sense of being immersed in this world. Built worlds can be saved to and loaded from save files; inside VR, this is done with floppy disks and a virtual PC placed in the room.
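
Under the hood, saving and loading can be as simple as serializing the described objects to a JSON file; the floppy disks and the virtual PC are just the VR front end for those files. A minimal sketch, again with illustrative names, reusing the serializable HouseJson type from above:

    using System.IO;
    using UnityEngine;

    [System.Serializable]
    public class SaveFile
    {
        public HouseJson[] objects;   // everything the user has described so far
    }

    public static class WorldSaver
    {
        public static void Save(SaveFile world, string path)
        {
            File.WriteAllText(path, JsonUtility.ToJson(world));
        }

        public static SaveFile Load(string path)
        {
            return JsonUtility.FromJson<SaveFile>(File.ReadAllText(path));
        }
    }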

Menus on the Vive controllers

Conclusion

In short, this application allows users to build environments that bring back nostalgic memories. The current result is not very realistic yet, but adaptations and further extensions can easily be built to increase the feeling of immersion.

During this thesis, we discovered lots of new technologies and learned more about developing VR applications. We would like to thank Craftworkz for their support and services during all these months.

The room interface
