It's time to stop being afraid of new technology
We're on the cusp of a major change in the way our gadgets do things. Artificial intelligence and machine learning are no longer confined to science fiction novels, and smart machines are being deployed for even the most mundane tasks as well as the high-profile ones that catch our attention. While I think we're still at least a few years away from the point where we all have our own robotic butlers and flying cars, the possibilities are no longer in doubt.
Along with the breakthroughs that enable machines to make real decisions comes an inherent fear of the consequences. Some of those fears are valid, many are silly, but every one of them makes for a great headline. Whether covering "Elon Musk's billion-dollar crusade to stop the A.I. Apocalypse" (a real headline) or reminding us that we're all one breath away from having our identities stolen, reporters and publications need to provide both sides of every issue and point us toward resources where we can learn more. Doing neither makes us unnecessarily suspicious of the tech breakthroughs that will be a part of our future.
I'm going to pick on the iPhone X today. Before anyone gets upset, I'll tell you my impression of the iPhone X without ever having touched one: too bad the cool stuff it can do came from Apple first, because I really don't want to use an iPhone every day. It's an iPhone in the Essential Phone's body with some excellent tech at the top that can do some really interesting things. If you like the iOS ecosystem, it seems like the phone you want to buy. And because of the fascination with all things Apple, it's getting the lion's share of attention from the Western press. That might be a good thing for other companies, though, because much of the press surrounding the things that make it special isn't the good kind.
Two recent articles stand out when it comes to today's new smart tech, how Apple uses it, and why we're told it's something to be concerned about, though I'm sure there are countless others. In October, Wired talked about how machine learning "COULD SURFACE YOUR IPHONE'S SECRETS" (yes, in all caps) and Reuters told us how facial recognition "spooks" privacy experts. Both need to be read with a very critical eye.
Rene Ritchie did an excellent job discussing the problems with Wired's article, which basically claims that machine learning can find your nude photos and do something nefarious with them, but I still need to point out a bit of text from the article itself.
Essentially, Apple's Core ML system (its machine learning framework and the hardware that processes the data) can't do anything that any other app couldn't already do. Even if you tell the system to root out photos that appear to be of naked people, it can't do anything with them if it finds any. Yet the article and its alarmist title are there for everyone to see.
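To make that concrete, here's a minimal sketch of what an app using Core ML actually looks like. The "NudityClassifier" model class is hypothetical (any bundled image classifier works the same way); the point is that the app has to go through the same photo-library permission prompt as every other app, and the framework only hands back labels computed on the device.

```swift
import UIKit
import Photos
import Vision
import CoreML

// A minimal sketch, assuming a hypothetical bundled model class "NudityClassifier".
// Core ML only sees images the app can already reach through the normal
// photo-library permission prompt, and all it returns is labels computed on-device.
func classifyFirstPhoto() {
    PHPhotoLibrary.requestAuthorization { status in
        guard status == .authorized else { return }  // the same prompt every app gets

        guard let asset = PHAsset.fetchAssets(with: .image, options: nil).firstObject else { return }

        let imageOptions = PHImageRequestOptions()
        imageOptions.isSynchronous = true
        PHImageManager.default().requestImage(for: asset,
                                              targetSize: CGSize(width: 299, height: 299),
                                              contentMode: .aspectFill,
                                              options: imageOptions) { image, _ in
            guard let cgImage = image?.cgImage,
                  let model = try? VNCoreMLModel(for: NudityClassifier().model) else { return }

            let request = VNCoreMLRequest(model: model) { request, _ in
                // The "result" is nothing but labels and confidence scores.
                // Sending them anywhere would take ordinary network code,
                // subject to the same sandbox and App Store review as any other app.
                if let top = (request.results as? [VNClassificationObservation])?.first {
                    print("\(top.identifier): \(top.confidence)")
                }
            }
            try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
        }
    }
}
```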
Reuters poses the premise that security researchers are afraid of what Apple's facial recognition means for our data privacy: specifically, that a third-party developer could somehow use data from the iPhone X's camera in ways that intrude into our lives, or even use it as identification credentials. It's good that security researchers and privacy advocates worry about these things; that's what they're supposed to be doing. It's not as good when Reuters tells us the ACLU is taking a close look but never explains what data is actually shared with third-party developers or what they can do with it.
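For context, here's a rough sketch of what a third-party app can actually request on the iPhone X, assuming ARKit's face-tracking API is the data path in question (the article never says). With the standard camera permission, a developer gets a coarse face mesh and a set of named expression values; Apple says the Face ID data used to unlock the phone stays in the Secure Enclave and isn't exposed to apps at all.

```swift
import ARKit

// A rough sketch of ARKit face tracking, the API a third-party developer would
// use on the iPhone X. It requires the standard camera permission and hands back
// a coarse face mesh plus named expression coefficients, not the Face ID data.
final class FaceDataViewer: NSObject, ARSessionDelegate {
    private let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }  // TrueDepth devices only
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())  // triggers the normal camera prompt
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // What the app actually receives: mesh vertices and values from 0 to 1
            // describing expressions, such as how open the jaw is.
            let vertexCount = faceAnchor.geometry.vertexCount
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
            print("mesh vertices: \(vertexCount), jawOpen: \(jawOpen)")
        }
    }
}
```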
This isn't an Apple problem, even though it's Apple's product in the spotlight. We've all seen or read about the things Google can do with its advanced machine learning, whether that means making a better camera and gallery to take and view your photos or diagnosing disease earlier so treatment can begin when it's most needed. But machine learning also plays a big part in things we wouldn't associate with tech, like disposable pens or tomatoes.
Entire industries already use machines that make rudimentary decisions and will be deploying even smarter ones as they are developed. Many products you use (or even eat!) every day were processed through an automation line that manufactured, sorted and inspected them using cameras and smart computer systems. Then they were packaged by machines that knew what size box to use based on what was dumped into a hopper, and placed on the right pallet so the right equipment could deliver them to the right loading dock.
Concern about what even more advancement might mean for unemployment is something laymen should be discussing, but inherent safety and privacy concerns are best left to the experts until actual problems are found. Sensationalism at this stage will only lead to regulations written by people wholly unqualified to write them. Imagine your senator or member of parliament trying to dissect TensorFlow or Cloud ML and find ways to "protect" us from them.
We need highly qualified people to look long and hard at machines that can think. We also need responsible reporting on what those researchers have to say instead of clickbait. Remember, every headline you can see is also one that members of the United States Senate Judiciary Subcommittee on Privacy, Technology and the Law can see. It's very important that all of us get the facts without the hyperbole. Let's not kill the next big thing before it gets off the ground.
SpaceX photograph courtesy of Pushkr - https://www.flickr.com/photos/pushkargujar/23791728242/, Creative Commons 2.0