At no time since the Industrial Revolution has the world looked more like a science fiction landscape than it does right now, in the era of self-driving cars, exoskeletons, and missions to Mars. The futuristic dreams of the tech industry have created a newfound optimism for what The Next Big Thing will be: perhaps we’ll all be watching movies in virtual reality while talking with our hologram friends as robots do our chores for us.
And the architects of this future certainly hope so. Google has heavily invested in robotics and artificial intelligence, while Amazon has similar hopes of being the personal assistant we’ve all dreamed of. Even Microsoft has joined the newer tech firms in unleashing wish-fulfilling projects such as Cortana or HoloLens.
Whether we’ll one day have a computer that can accurately simulate the human mind—or even surpass it—is largely a question of time, not ability.
Whether these dreams can become a reality is largely an answered question. Technological development is not only accelerating—the rate at which it accelerates is itself increasing. Whether we’ll one day have a computer that can accurately simulate the human mind—or even surpass it—is largely a question of time, not ability. However, the most ambitious projects of Silicon Valley may be forgetting that the largest hurdles for any major development might just be the humans surrounding it.
This is the lesson Google learned when it had to temporarily shelve Glass, its smart eyeglasses project. Despite positive reviews from critics and even praise for its applications in the workplace, Glass made key mistakes in trying to fit into our culture. The device has largely been seen as nerdy at best and invasive at worst, with many businesses banning it outright. For all its futuristic mission and fanciful technology, Glass failed to account for some core assumptions consumers hold.
As others have pointed out, Samsung’s warning that its voice-activated smart TVs may capture and transmit what their owners say bears a striking resemblance to the telescreens of George Orwell’s 1984, devices that watch and listen to everything their owners do. However, Samsung is far from the first company to face this problem: Microsoft was forced to pull away from the “always-on” Kinect for the Xbox One, and even Amazon’s Siri competitor, Echo, has policies very similar to Samsung’s.
Voice activation has long been the dream of techies and nerds alike, but here we see the problems the future presents to us. While most consumers are likely not even aware of the privacy dangers such innovations pose, the industry as a whole is so reliant on the economy of data that it may not be able to bring us such technology without seriously endangering any expectation of privacy.
One in five millennials—supposedly the most tech-savvy and secular generation in history—believes vaccines cause autism.
We see a similar problem among technological advancements in the medical field. Nanotechnology, for example, stands to massively change the way doctors and patients understand illness and treatment, enabling doctors to analyze internal problems with greater care and precision and arming patients with better data to change their lifestyles and live healthier—call it the ultimate culmination of “the Quantified Self.”
However, you first have to convince patients it’s a good idea to let a doctor inject microscopic cameras, sensors, and even robots into their bloodstream. Doctors are having a hard enough time convincing people vaccines work. In fact, one in five millennials—supposedly the most tech-savvy and secular generation in history—believes vaccines cause autism. If so strong a rumor can brew around debunked claims about a 50-year-old vaccine, what rumors will rise when doctors want to inject robots into you?
And while consumers’ skepticism presents a danger, so does their complacency. The lack of security in many of the most ambitious entries into the Internet of Things poses a growing danger to every consumer, and most likely have no idea. As control over products increasingly shifts from consumers to the devices themselves, consumers will only grow more complacent about risks to their own data.
This issue is nowhere larger than in the growing smart car movement. According to a report compiled by Senator Ed Markey this week, only two of 16 car manufacturers had any real plan to prevent hackers from infiltrating a car’s onboard computer. As Wi-Fi and Bluetooth become major components of cars’ onboard systems, the possibility of hackers gaining access to a car’s brakes or acceleration is a growing risk—one the industry isn’t ready to handle.
Though a completely avoidable problem, the lack of security in most cars means the race to bring a truly autonomous vehicle to market is either moving too fast or must be stalled—driverless cars are expected to hit the road in just two years. The future of driverless taxis might be presenting itself to us right now, but it doesn’t seem the world can be trusted to handle it.
The only thing holding us back from embracing technologies that could replace us is ourselves.
Along with self-driving cars, the growth of artificial intelligence has become the most popular harbinger of the future. From HAL 9000 to Rosie from The Jetsons, a truly sentient machine is an iconic set piece in dreams of the future. However, we might find ourselves ill-equipped to handle such an innovation—both Elon Musk and Bill Gates have warned of the dangers AI could present.
With every major announcement from Silicon Valley, we learn of the gaps in our culture that technology wants to fill. Yet with each new gadget or mind-blowing innovation, we find something to fear or, far worse, something we should fear but don’t. As millionaires and billionaires cloistered in California attempt to turn the world into a science-fiction story, the only thing holding us back from embracing technologies that could replace us is ourselves.