Imagine being surrounded by technologies yet hardly being aware of them. For instance, a person walks into a room and, without doing anything, the entire atmosphere is fine-tuned to his or her current mood and expectations. Measurements are taken, personal data is sensed and recorded, and the room adjusts itself to suit the person's demeanor. All of this happens without flipping a switch or adjusting an appliance; the person simply walks into the room. We're already beginning to move in this direction, with recent advances in medical technology, personal fitness devices, and smart home systems.
Behind the scenes, as the individual enters the room, the unseen technology advances the person's security, health, comfort, and even creativity through a seamless set of adjustments to everything from room temperature to computer access to food preparation. What's not seen are the computers running high-speed algorithms that process commands and actions to meet everyday needs without friction.
Here’s a real-world scenario: A person walks into the room. Sensors identify the person, as well as the person’s mood, using facial features and expressions, body temperature, and movements, including gait and posture. Additionally, the smart room can monitor the individual’s current health conditions, such as blood pressure, heart rate, breathing, and chemical composition. This is all done in real time.
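The fusion step described above can be sketched in a few lines. The sensor fields, state labels, and thresholds below are illustrative assumptions, not a real product's model; a production system would use trained classifiers rather than hand-written rules.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One snapshot from the room's (hypothetical) sensor array."""
    heart_rate_bpm: float
    body_temp_c: float
    expression: str       # e.g. "smile", "frown", "neutral" from a face model
    gait_speed_mps: float

def infer_state(r: SensorReading) -> str:
    """Fuse readings into a coarse occupant state with simple rules.

    The thresholds are placeholders chosen for illustration only.
    """
    if r.heart_rate_bpm > 100 or r.gait_speed_mps > 1.8:
        return "active"
    if r.expression == "frown" or r.body_temp_c > 38.0:
        return "stressed"
    if r.expression == "smile":
        return "relaxed"
    return "neutral"
```

The room would run something like `infer_state` continuously on fresh readings and use the resulting label to pick lighting, temperature, or content presets.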
Next, a robot appears, bringing water and the vitamin supplements the sensors determined were lacking. While sitting on a couch overlooking a virtually generated ocean with the sound of crashing waves, the person decides to catch up on what is happening in the world and accesses the latest news with a quick sweeping gesture in the air. The room instantly replaces the ocean scene with a news program. It all happens without the user needing to be aware of the underlying process.
Such a scenario isn’t limited to the confines of a person’s living quarters. It can apply to a city, a park, a museum, or a business. Imagine a society so intelligent that machines and computers are constantly collecting data and learning from our actions and behaviors to make sound judgments and decisions. This human-centric vision is just that: centered on the individual, with technology serving to enhance, nurture, and protect, making life continually easier, healthier, and more productive. Importantly, people will retain and use the power to preprogram myriad commands and conditions suitable for their daily lifestyles and activities.
The goal is to integrate technology into our daily lives without being continually confronted by its presence. Technology is meant to work for us and learn about us, not the other way around. Although its presence is essential, it should fade into the background of daily life.
When you think about a future such as this, it’s important to look beyond the present. Innovations must be built on the premise of human-centered design (HCD). Companies interested in serving this vision will develop solutions by involving the human perspective in every step of the problem-solving process. That involvement typically means observing the problem in context, brainstorming, conceptualizing, developing, and implementing the solution.
Here are some examples of technologies that will bring this vision to reality. First is a 60-GHz sensor for micro-motion sensing, such as gesture recognition. The sensor can be embedded into mobile devices, letting users perform tasks and issue commands without speaking to or physically touching the device. Functions can include, but aren’t limited to, volume control, page turning, call answering, and typing.
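Once such a sensor classifies a micro-gesture, the remaining software step is a simple mapping from gesture to command. The gesture labels, command names, and confidence threshold below are hypothetical placeholders, not the output format of any specific 60-GHz sensor.

```python
from typing import Optional

# Hypothetical gesture labels as a radar-based classifier might emit them,
# mapped to the device functions mentioned above.
GESTURE_COMMANDS = {
    "swipe_left": "next_page",
    "swipe_right": "previous_page",
    "circle_cw": "volume_up",
    "circle_ccw": "volume_down",
    "tap": "answer_call",
}

def dispatch(gesture: str, confidence: float, threshold: float = 0.8) -> Optional[str]:
    """Map a recognized micro-gesture to a device command.

    Detections below the confidence threshold are ignored to avoid
    accidental triggers; unknown gestures return None.
    """
    if confidence < threshold:
        return None
    return GESTURE_COMMANDS.get(gesture)
```

In practice the confidence gate matters as much as the mapping: a gesture interface that fires on marginal detections quickly erodes user trust.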
Also important is an image processor with computer vision and deep-learning capabilities. This processor, designed for edge computing, will incorporate the kind of advanced image-processing algorithms that let computers derive high-level understanding, even comprehension, from a wide array of digital images or videos. The built-in intelligence allows for monitoring patterns and objects of interest, with the ability to selectively detect pre-programmed shapes and objects without unnecessarily storing large amounts of data. In other words, it is highly efficient while remaining intelligent and even discriminating.
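The "detect selectively, store sparingly" idea can be shown with a minimal sketch. Assume a detector has already labeled the objects in each frame; the edge logic then keeps only the frames that contain a pre-programmed class of interest and discards the rest. The class names are illustrative.

```python
from typing import Iterable, List, Set

# Classes the edge processor is pre-programmed to care about (illustrative).
CLASSES_OF_INTEREST = {"person", "vehicle"}

def frames_to_keep(detections_per_frame: Iterable[Set[str]]) -> List[int]:
    """Return the indices of frames containing an object of interest.

    An edge vision chip applies the same principle in hardware: frames
    with no relevant detections are dropped instead of stored, which is
    what keeps memory and bandwidth requirements low.
    """
    return [
        i for i, classes in enumerate(detections_per_frame)
        if classes & CLASSES_OF_INTEREST
    ]
```

On a stream where objects of interest appear only occasionally, this filtering is what makes on-device storage feasible at all.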
Another kind of IC technology required is a video codec that can be combined with low-power, multi-core processors as a cost-effective alternative to high-powered data centers. Because it’s costly to maintain high-performance data servers to simultaneously process large amounts of content, such solutions allow for power efficiency when processing large volumes of video and images. Again: intelligent and efficient.
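The multi-core approach rests on a simple idea: split the frame sequence into chunks and let each core process one chunk locally instead of shipping the whole stream to a data center. Below is a minimal sketch of just the chunking step; the function name and parameters are my own for illustration.

```python
from typing import List, Tuple

def chunk_ranges(n_frames: int, n_workers: int) -> List[Tuple[int, int]]:
    """Split a frame sequence into near-equal [start, end) ranges,
    one per core, so a multi-core edge device can encode or analyze
    the chunks in parallel.
    """
    base, extra = divmod(n_frames, n_workers)
    ranges: List[Tuple[int, int]] = []
    start = 0
    for i in range(n_workers):
        size = base + (1 if i < extra else 0)  # spread the remainder
        ranges.append((start, start + size))
        start += size
    return ranges
```

Each range would then be handed to a worker (for example via a process pool), and the encoded chunks stitched back together in order.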
Realizing the HCD vision requires expanded expertise in IoT, imaging, and computing solutions such as sensors, cameras, vision systems, and edge computing. Time will tell whether users, businesses, and technology developers will embrace systems that surround our lives without our immediate input or continued awareness of what they’re doing and how they respond to our requirements and requests. Despite the challenges, the benefits of HCD, and of a society that makes everyone’s lives safer, smarter, more efficient, and more productive, can be realized.