We are quickly approaching the finish line on our latest iDGi-1 upgrade, and I thought it was time I started talking about the control scheme we are building using the Source engine. As I stated in a previous post, we are using the Source game engine to properly display, and allow interactivity with, what you experience on the other side.
Unfortunately, even if we make a stable connection this very second, we're still looking at several months of engine development before we can allow you, the public, unrestricted access.
That brings us to one element I'm eager to share with you. We have designed and are nearly ready to begin testing a fairly detailed system which will allow you (the "user") to communicate, as the host, with anyone encountered on the other side.
The physical method for communication is actually much simpler than you might imagine. We are being given nearly unrestricted access to the higher brain functions of a human being. With such access comes the potential to manipulate the motor cortex, the region where all of the muscle movements that generate speech originate. This part is easy enough, as all speech is technically created through muscle movement and does not necessarily require conscious input or complex brain usage. But how do we know what the user would like to make the host say? The simple answer is that we do not. Unfortunately, we can't read minds (yet!). BUT we are attempting the next best thing by constructing an algorithm within iDGi-1 that will offer options for speech whenever appropriate.
When we began putting this together, it was important to us that the host we are connecting to appear as "normal" as possible to the people we will be communicating with. Originally we imagined a sort of question system whereby we could make the host ask people questions appropriately chosen by iDGi-1 to match the situation.
But then we realized we don't want the host to appear like a robot! We needed a system whereby the host could be made to go several levels deeper in conversation. If someone walks up to the host and says, "Hello there, how are you?", we should be able to choose an appropriate response, right? But then what? There's no guessing how a real human being will react to whatever choice you make, so it was extremely important that our system be capable of adapting to conversational change. It took Dr. Schelter and our team many weeks of intense design work to come up with a potential solution: iDGi-1 will continually present updated choices on your HUD, in real time, as the host speaks with people on the other side. It hears what is being spoken to the host and generates potential replies accordingly. Preliminary tests are still ongoing, and we won't know for sure that it works until we've broken through the veil and can go live with testing... but the ultimate goal is to allow the user to hold fully realistic conversations with people on the other side of the rift.
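For the more technically minded among you, the flow looks roughly like the sketch below, written in C++ since that is the language the Source engine itself is built in. To be clear: every name here (DialogueChoiceManager, GenerateCandidateReplies, and so on) is a hypothetical illustration of the idea, not an actual iDGi-1 or Source API.

```cpp
#include <string>
#include <vector>

// A candidate line iDGi-1 might offer on the HUD.
struct DialogueChoice {
    std::string text;      // what the host would be made to say
    float       relevance; // how well the satellite scores it against context
};

class DialogueChoiceManager {
public:
    // Called each time speech directed at the host is transcribed.
    // The full running transcript is kept, so choices can adapt to
    // conversational change rather than reacting only to the last line.
    void OnSpeechHeard(const std::string& transcript) {
        m_context.push_back(transcript);
        m_currentChoices = GenerateCandidateReplies(m_context);
    }

    const std::vector<DialogueChoice>& CurrentChoices() const {
        return m_currentChoices;
    }

private:
    // Stand-in for the satellite-side reply generator; it returns canned
    // acknowledgements here purely so the sketch is self-contained.
    static std::vector<DialogueChoice> GenerateCandidateReplies(
        const std::vector<std::string>& context) {
        std::vector<DialogueChoice> out;
        out.push_back({"I'm doing fine, thanks for asking.", 0.9f});
        out.push_back({"Who wants to know?", 0.6f});
        (void)context; // a real generator would condition on the transcript
        return out;
    }

    std::vector<std::string>    m_context;        // everything heard so far
    std::vector<DialogueChoice> m_currentChoices; // what the HUD should show
};
```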
To start with, iDGi-1 will offer a choice of 1-4 possible responses. The user will also always have the option to say nothing; it was important to allow for silence, as people will not always agree with what the satellite chooses for them to say. And let's face it: sometimes the best thing to say is nothing at all.
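In code terms, that selection rule boils down to a simple invariant: never more than four spoken options on screen, with silence always present as an extra, unconditional entry. Here is a hypothetical helper showing the idea; as above, nothing in it is a real API.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Cap the HUD at four spoken responses and always append the silent
// option as a final, unconditional entry. Purely illustrative.
std::vector<std::string> BuildHudOptions(std::vector<std::string> candidates) {
    const std::size_t kMaxSpokenChoices = 4; // current limit
    if (candidates.size() > kMaxSpokenChoices) {
        candidates.resize(kMaxSpokenChoices);
    }
    candidates.push_back("[Say nothing]"); // always available, never generated
    return candidates;
}
```

Notice that the cap lives in a single constant; that is exactly the kind of value a future software upgrade could raise, which brings me to my next point.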
Lastly, I should mention that everything discussed here will be fully upgradeable in the future. As we continue to progress through time on the other side, we will in turn be learning more, and we can continually upgrade our satellite software to ensure maximum immersion. Not only can we upgrade the Source engine to increase the visual fidelity of our representation of the other world, but further upgrades to the system could also mean 1-6 choices of speech instead of 1-4 (for example!).
Vidal is working on some new hardware that NASA has agreed to help send up, assuming we are successful with our upcoming 3D forays through the rift.

I guess that's all for now.
My next post will be about the "Save/Load" system we are building and how its treatment is wildly different from any other game out there. You'll never look at saving and loading the same way again!
-G.A.M.