This breaks a lot of my rules. The connection is probably mostly emotional. For most of the people in the event, there is little to interact with (other than as an observer). In fact, I wonder if there is even a technical connection between the game screen and the launch, or if someone just watches the screen and pulls the trigger on the launcher.
This might be something as simple as a real life mock-up of a video game, but WOW does it win for fun factor! I guess in the end, fun trumps all.
I understand why they are grouped together, but the interactive nature of chumby and Karotz bring them closer into the realm that I’m thinking of.
What is fun about the Karotz is that its “display” is not a screen. The “display” is ears that move, a speaker, and lights inside the body of the rabbit. Unfortunately, I believe the user’s ability to physically interact with the device is limited to passive interactions, such as watching or listening.
As for the chumby, a user can interact with it both passively and actively, but both kinds of interaction are limited to the screen and maybe a few buttons, which is what we are already familiar with from our other devices (such as computers, phones, and pads).
I don’t think that interactive connected reality™ needs to preclude the use of a screen or buttons, as they may be the best or only way to achieve certain kinds of interactions, but I want to think about growing our set of interactions back into ones that are truly more natural.
I’ll end this post with a quote from Craig Yoho (one of my staff). “Physics games and engines are very popular right now. With this project, we can take advantage of the best physics engine ever invented… physics.”
With the recent proliferation of Arduino and other inexpensive microcontrollers, I feel that we will see more and more people connecting things to the internet.
Pachube is a great example of a service that has allowed people to easily share and use vast amounts of shared data (which is being automatically entered by devices created by everyday people around the world). A very interesting and timely example is the real-time crowd-sourced radiation monitoring in Japan.
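To make the device-to-feed idea concrete, here is a minimal sketch of what a sensor update to a Pachube-style service might look like. The URL, header name, and CSV body format below are my own illustrative assumptions, not Pachube's actual API:

```python
import urllib.request

def build_update(feed_id, api_key, value):
    """Build (but don't send) an HTTP PUT updating a hypothetical data feed."""
    # Hypothetical endpoint and header; a real service would document its own.
    url = "http://api.example.com/feeds/%s.csv" % feed_id
    req = urllib.request.Request(
        url,
        data=("radiation,%s" % value).encode(),  # CSV row: datastream, reading
        method="PUT",
    )
    req.add_header("X-ApiKey", api_key)
    return req

req = build_update("1234", "SECRET", 0.12)
print(req.get_method(), req.full_url)
# PUT http://api.example.com/feeds/1234.csv
```

On a microcontroller the same request would be a few lines of an Ethernet or Wi-Fi client library, which is exactly why Arduino-class hardware makes this so accessible.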
I chose Interactive as the first word of the phrase for emphasis. I really want to spend time thinking about and playing with what exactly that means. There are many permutations. Here are some:
Real life interaction with a real thing
Real life interaction with a real thing (that can influence an online/virtual thing)
Real life interaction with a real thing (that can influence another connected real thing)
Online interaction with a real thing
Online interaction with an online/virtual thing (that can influence a real thing)
Online observation of a real thing
A real life thing maintaining/representing the state of an online thing or interaction
An online/virtual thing maintaining/representing the state of an online/virtual thing or interaction
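One way to attack the "better abstraction" to-do: treat each permutation as a tuple of (where the interaction happens, what is interacted with, what it influences) and enumerate the combinations. This is just my own rough framing of the list above, not a finished taxonomy:

```python
from itertools import product

# Dimensions pulled from the list above (an assumed framing, for exploration):
media = ["real life", "online"]            # where the interaction happens
things = ["real thing", "virtual thing"]   # what is interacted with
influences = [None, "real thing", "virtual thing"]  # what it can influence

permutations = list(product(media, things, influences))
print(len(permutations))
# 12
```

Even this crude enumeration yields more permutations than the hand-written list, which suggests the abstraction exercise is worth doing.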
This list can probably be better abstracted. I guess I have a “to do” there. Then again, perhaps someone has already done this. It sounds very much like a user-interface problem, and countless hours of research have already gone into that field. Maybe I can get some help or thoughts from someone who has studied UI.
Again, this entire subject is already very obvious to some, or has been explored thoroughly by others in the past, but perhaps there are new variables or a new environment. I see an inevitable and clear path to ubiquitous Interactive Connected Reality™, and I want to help make it happen.
My current task is to focus on all three words and to make fun stuff/prototypes that make it become a reality. See my following posts for more thoughts as I flesh this out more.
Their mention of “patented technology” is kind of a turn-off for me. Also, they mention the concept of ACID. That strikes me as an important concept for more than just databases. I need to read more about SaaS (Software as a Service), as I’m assuming there is something similar there.
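For readers unfamiliar with ACID: the “A” (atomicity) means a transaction either applies completely or not at all. A minimal illustration with Python's built-in SQLite (my own toy example, nothing to do with the service mentioned above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # the with-block runs as one transaction
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        # This insert violates the PRIMARY KEY constraint and fails...
        conn.execute("INSERT INTO accounts VALUES ('alice', -1)")
except sqlite3.IntegrityError:
    pass  # ...so the whole transaction, including the debit, rolls back

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'"
).fetchone()[0]
print(balance)
# 100
```

Alice's debit is undone along with the failing insert; the database never shows a half-applied transaction. The durability and isolation guarantees are what get interesting once many independent devices start writing to shared state.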