On the Use of AI – the Dependency Dilemma

Jeff Robbins, an IEEE Life Member and active IEEE Society on Social Implications of Technology member for over 30 years, admits he pursued engineering for surprising reasons: he first studied the subject “because the three older boys next door did,” and later enrolled in graduate school at the University of New Mexico due to another twist of fate, when his car broke down in Albuquerque. “Of course, it’s not so toss of the coin as that,” he says, citing a passion for “understanding how things worked, the universe included.” A problem identifier and solver from an early age, Robbins conducted experiments with his friends: “Long before solar energy, long before climate change was on the world’s tongues, a fellow student and I came up with a solar energy project that generated electricity — we also cooked a hot dog using a parabolic antenna we bought in New York City’s Chinatown.” While he was weighing graduate school against continuing research engineering work on a computer simulation of the thermal stratification of the liquid hydrogen fueling the second stage of the Apollo Saturn V rocket, a car breakdown between Los Angeles and NASA’s Manned Spacecraft Center in Houston sealed the deal.

Before taking his current position teaching research courses on science and technology in the Writing Program at Rutgers University, Robbins taught at The New School and worked professionally on automatic test equipment (ATE) for high-performance aircraft, on automotive electronics, especially Ford’s antitheft system, and on the certification of undersea fiber-optic cable systems for TyCom. Most recently, Robbins’ research interests have “stem[med] from an ongoing concern for the too-often swept aside bite-backs of rising technical order,” which has led him to moderate panels and give talks on such intriguing topics as the future of artificial intelligence, computers, and robotics; GPS navigation dependency; media’s increasing role in childhood and adolescence; and the impact of the transition to digital high-definition television.

Below, Robbins discusses his recent article, “When Smart is Not: Technology and Michio Kaku’s ‘The Future of the Mind’,” published in IEEE Technology & Society Magazine, which examines the ethical implications of our increased dependence on new technologies and asks whether we ultimately benefit from their pervasive use.

Note: This interview has been edited and condensed by humans.

What is the greatest advantage of dependence on artificial intelligence (AI)?

For me, the greatest advantage of dependence on AI accrues to the engineers, scientists, venture capitalists, startups, and mega-corporations creating dependence on AI. To be sure, there are, and will continue to be, benefits on the user side of the equation as embedded technical intelligence enables the doing of things, the creating of things, the grasping of things, [which was] never possible before. Those benefits are the result of our using technology to free up effort for more fruitful application, not of looking to technology to do the work for us. These benefits stem from what John Markoff, in his latest book, Machines of Loving Grace, calls IA, “intelligence augmentation,” as distinguished from AI. The term AI, Markoff writes, was coined in 1956 by mathematician and computer scientist John McCarthy to describe technology designed to “mimic” or replace human capabilities. “Today,” according to Markoff, “the engineers who are designing the artificial intelligence-based programs and robots will have tremendous influence over how we will use them. As computer systems are woven more deeply into the fabric of everyday life, the tension between augmentation and artificial intelligence has become increasingly salient.” [1] He adds that there is also a paradox: “[The] same technologies that extend the intellectual powers of humans can displace them as well.” [2]

What is the greatest disadvantage of dependence on AI?

The greatest disadvantage of dependence on AI is dependence on AI.

In our ongoing, accelerating master / slave affair with AI and machine-learning-churned technology, we face the increasing prospect of a “losing-it-for-not-using-it” future. As we become more and more dependent on technology to do what we once had to do for ourselves, we are, as Nicholas Carr warns in The Glass Cage: Automation and Us, losing our skills through loss of the need to use them. We are losing our physical health through loss of the need to exercise… We are losing our vital face-to-face social skills to, among other things, the always-at-hand, always-in-hand smartphone.

We offload the work onto AI / machine-learning-enabled soft and hard robot slaves that do so that we don’t have to. But, as the power in the technology does more and more, what will happen to our brains and bodies when the need to pay attention, the need to remember, the need to move are swept away by technology that does the attending, the remembering, the moving?

What is the most likely danger of dependence on AI?

Moshe Vardi, the Karen Ostrum George Distinguished Service Professor in Computational Engineering at Rice University, sees more and more jobs succumbing to the inroads of automation in both the developed and developing worlds. In an IEEE Spectrum podcast conversation, Vardi tells Steven Cherry, “I believe that it’s going to be quaint in another generation [to] talk about driving your own car.” But what happens to the drivers of trucks and buses and taxis when trucks, buses, and taxis begin to drive themselves? In supermarkets, more and more customers check themselves out. Loading shelves, restocking shelves, taking inventory, tracking sales, tracking customers: all will be done by ever more artificially brilliant robots. Warehouse operations are already heading toward full automation. “[As] both AI and computing power and robotics all continue to progress,” says Vardi, “we will see bigger and bigger swaths of jobs just being taken over by robots.” Most of whatever’s left of farming work will “gradually [be] eaten away by automated machinery, with GPS, with self-driving machinery.”

In a reflective step back from his 1980s profile of MIT’s Media Lab, Whole Earth Catalog founder Stewart Brand mused that “New technologies create new freedoms and new dependencies. The freedoms are more evident at first. The dependencies may never become evident, which makes them all the worse, because it takes a crisis to discover them.” [3] That observation was made more than a quarter century ago, and it seems that little to nothing has changed. Echoing Brand, Vardi tells Cherry that “We run away with technology and deal with the consequences later.”

In your opinion, what is the greatest differentiator between human intelligence and artificial intelligence?

As I see it, the greatest differentiator between human intelligence and artificial intelligence is the infinitely rich “analog” means by which human intelligence grows from birth. Every human being’s intelligence, or lack of it, evolves from the unique merger of genetic predisposition and living experience. Human intelligence has an organic tie-in to the multi-billion-year “wisdom” of evolving life. The seeds of evolving artificial intelligence, at least at the outset, are channeled along lines that its enablers consider intelligent, with vested interests pulling the strings backstage, capitalizing on the rarely examined conventional-wisdom equation that more done for us equals better for us.

As with the proliferation of the Internet of [profitable] Things, aimed at making life easier, more convenient, more efficient, and more commercially exploitable, one suspects that the real motivation for the race in AI / machine learning advance is “killer app” profit and the reality-based fear, both military and commercial, that if you don’t, someone else will. The result of this ever-increasing replacement of human effort, physical, mental, and face-to-face social, with technology is the erosion of human skill, engagement, attention, patience, persistence, and motivation, making it ever easier for artificial intelligence to replace human intelligence downstream, and not necessarily for the better.

How has the progression of this dependence changed in the last ten years? The last five?

In his books, and in the essay “The Law of Accelerating Returns” on his website, Ray Kurzweil proclaims that technology is advancing at a doubly exponential pace, the exponent itself continuing to escalate. Ten years ago there was no iPhone, no apps. Now the iPhone is virtually glued to human hands as our dependency on it for more and more continues to grow. John Markoff captures the trend, and the worry, as we cede “individual control over everyday decisions to a cluster of ever more sophisticated algorithms.” He cites the observation of Silicon Valley venture capitalist Randy Komisar, who, after pondering Google’s Siri competitor, Google Now, remarked that “people are dying to have an intelligence tell them what they should be doing… What food they should be eating, what people they should be meeting, what parties they should be going to.” Paraphrasing Komisar, Markoff writes that “For today’s younger generation, the world has been turned upside down… Rather than using computers to free them up to think big thoughts, develop close relationships, and exercise their individuality and creativity and freedom, young people were suddenly so starved for direction that they were willing to give up that responsibility to an artificial intelligence in the cloud. What started out as Internet technologies that made it possible for individuals to share preferences efficiently has rapidly transformed into a growing array of algorithms that increasingly dictate those preferences.” [4]
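[To see what “doubly exponential” means in symbols, here is a minimal sketch, assuming, purely for illustration and not as Kurzweil’s own formulation, that the growth rate of some capability metric itself escalates exponentially:

    % ordinary exponential growth: a fixed rate k
    \frac{dV}{dt} = kV \quad\Rightarrow\quad V(t) = V_0 e^{kt}
    % assumed escalating rate (the exponent itself growing)
    k(t) = k_0 e^{at}, \qquad a > 0
    % integrating dV/dt = k(t)V yields an exponential of an exponential
    V(t) = V_0 \exp\!\left(\frac{k_0}{a}\left(e^{at} - 1\right)\right)

On this toy model, log V(t) itself grows exponentially in t, which is what “the exponent continues to escalate” amounts to.]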

Are there ethical implications to this?

Absolutely, and mainly in their neglect. Are those who are developing and marketing technologies, and in particular AI and machine learning technologies, concerned about what they are actually doing to the human brains that are becoming increasingly dependent on them? Would the developers want what they are doing to others done to themselves or their children? It is well known that significant numbers of Silicon Valley elite parents send their children to “tech doesn’t belong in early education” schools.

If ethics, at its most basic level, is the golden rule (don’t do unto others what you wouldn’t want done to you or your loved ones), then what advancing technology’s much-celebrated elite innovators, who send their own young children to “no screens at all” schools, are doing is unethical in spades.

You write that the takeaway of the first half of this article is “conventional wisdom.” What would you describe as the inverse of conventional wisdom?

The “conventional wisdom” on our ongoing, intensifying, dependency-escalating affair with technology’s “accelerating returns” is that the more technology does for us, the better off we are. Technical progress equals human progress.

The inverse of conventional wisdom asks: “But is it good?” If the “smart pill” [described in the T&S article] substitutes for the efforts our brains demand to sustainably develop the neural circuits that enhance cognition, those circuits will not get developed. The result will be diminished, not enhanced, cognition, rendering us more and more dependent on smart pills.

Relying on smart pills to do the mental work mirrors the loss of the need for internal mental navigation effort thanks to our increased dependency on GPS. That dependency not only dissipates our ability to find our way without the technology; the resulting failure to form cognitive maps may, in fact, shrink the hippocampus, raising the risk of dementia.

It bears repeating that technology, as I see it, really does do things for us; it does free us up to do and be and realize what we couldn’t do and be and realize without it. But, as I mentioned at the outset, it is the sweeping aside of the dark edge of technology’s double-edged sword that worries me. Someone who possesses, call it, unconventional wisdom regarding technology realizes that the genuine benefits technology affords always come with a toll, and that realizing those benefits demands sustained, vigilant use.

Can you further differentiate between the “science and technology elite” and those who “will fall into the trap of dissipative dependency”?

The science and technology elite use technology smartly, to translate effort into more fruitful work, while most users fall into the trap of dissipative dependency by looking to technology to do the work for them. The elite rise up with the technology. Rising technical order becomes, for them, rising human order. The elite have real power. Those who identify with the power in what they possess, in their smartphones, in the apps, have only the illusion of power, because with a click, with a flick, with a press, with a scroll, what they want done gets done. But all they did was click, flick, press, scroll.

With the inevitable pervasiveness of tech applications, how can the latter train themselves to be more responsible users?

Though it seems to have vanished from public awareness, if it ever was in public awareness, there was, for a time, a movement for what was called “media literacy.” Media literacy, its advocates proclaimed, should be integrated into the nation’s school systems with the aim of giving children and adolescents the means to step backstage from the incessant barrage and assess not only what technology is doing for them, but what the ever-increasing power of having it all done for them is doing to them.

You write, “The sensation of power users feel with technology that does the work for them is an illusion.” Can you expand upon this and give a further example?

The sensation of power is an illusion because the power resides in the technology, in the device, in the software, in the hardware, in the makers, the innovators, the financiers, the corporations, not, as aforementioned, in the user. “System 1” [the “fast, habit-driven, shortcut-seeking, convenience-loving brain” as described in the article] equates the possession of technical power with our own power. “System 2” [the “deliberative, more energy-demanding, critical-thinking brain”] realizes that just because a piece of technology does what we want done, just because the GPS in our smartphone gets us where we want to go, turn by turn, while we remain clueless as to how we got there, or where we are should the system break down, does not mean that the power to find our way resides in us.


[1]  J. Markoff, Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, Ecco / HarperCollins, New York, 2015, p. 343.

[2]  Ibid., p. xii.

[3]  S. Brand, The Media Lab: Inventing the Future at M.I.T., Penguin Books, New York, 1988, pp. 226-227.

[4]  Markoff, op. cit., pp. 341-342.

Comments

  • There is an analogy to the AI question of dependency: A/C.

    Fifty years ago we had attic fans and ceiling fans in our homes. We sat on the porch to avoid the summer heat in the house. Car interiors were cooled by rolling down the windows. Motel signs beckoned customers with “It’s cool inside,” as did movie theaters.

    Today we are air-conditioned at home, on the road, and at work. When the A/C breaks down, there’s a sense of urgency to get it fixed.

    Are you kidding? Of course there will be dependency on AI.
