Artificial Intelligence And ‘Real Understanding’

Speculation about–and vigorous contestation of–the possibility of ever realizing artificial intelligence has been stuck in a dreary groove ever since the Dartmouth conference: wildly optimistic predictions about the temporal proximity of the day machines (and the programs they run) will attain human levels of intelligence; followed by skeptical critique and taunting reminders of landmarks and benchmarks not attained; triumphant announcements of once-Holy Grails attained (playing chess, winning Jeopardy, driving a car, take your pick); rejoinders that these were not especially holy or unattainable to begin with; accusations of goal-post moving; reminders, again, of ‘quintessentially human’ benchmarks not attained; rejoinders of ‘just you wait’; and so on. Ad nauseam doesn’t begin to describe it.

Gary Marcus’s skepticism about artificial intelligence is thus following a well-worn path. And like many of those who have traveled this path, he relies on the easy puncturing of over-inflated pretension and on pointing to the genuinely ‘challenging problems like understanding natural language.’ This latest ability–[to] “read a newspaper as well as a human can”–is, unsurprisingly, what ‘true AI’ should aspire to. There is always some ability that characterizes real, I mean really real, AI. All else is but ersatz, mere aspiration, a missing of the mark, an approximation. This ability is essential to our reckoning of a being as intelligent.

Because this debate’s contours are so well-worn, my response is also drearily familiar. If those who design and build artificial intelligence are to be accused of over-simplification, then those critiquing AI rely too much on mysterianism. On closer inspection, the statement “the deep-learning system doesn’t really understand anything” treats “understanding” as some kind of mysterious, monolithic composite, whereas, as Marcus himself has indirectly noted elsewhere, it consists of a variety of visibly manifested linguistic competencies. (Their discreteness, obviously, makes them more amenable to piecemeal solution; the technical challenge of integrating them into the same system still remains.)
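To make that discreteness concrete, here is a minimal sketch of what treating ‘understanding’ as a battery of separate, visibly testable competencies might look like. Every name below–the system’s methods, the check functions, the test suite–is a hypothetical placeholder for illustration, not any real benchmark or API:

```python
# A minimal sketch: 'understanding' operationalized not as a monolith but as a
# battery of discrete, observable competency checks, each of which can be
# attacked (and scored) piecemeal. All names here are hypothetical placeholders.

def check_summarization(system, text, reference_summary):
    """Can the system produce a summary matching a reference?"""
    return system.summarize(text) == reference_summary  # stand-in for a real metric

def check_entailment(system, premise, hypothesis, label):
    """Does the system affirm the implications a competent reader would affirm?"""
    return system.entails(premise, hypothesis) == label

def check_instruction_following(system, instruction, expected_action):
    """Does the system do what the sentence asks it to do?"""
    return system.act_on(instruction) == expected_action

def understanding_profile(system, suite):
    """Score each competency separately; there is no single 'understands?' bit."""
    return {name: all(check(system, *case) for case in cases)
            for name, (check, cases) in suite.items()}
```

The point of the last function is that its output is a profile, not a verdict: precisely what makes piecemeal progress possible, and what leaves integration into a single system as the remaining technical challenge.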

‘Understanding’ often does a great deal of work for those who would dismiss the natural language competencies of artificial intelligence: “the program summarized the story for me but it didn’t really understand anything.” It does similar work in running together two views of AI: wholesale emulation of human cognition, including the structure and implementation of its cognitive architecture, versus mere successful emulation of task competence. As an interlocutor of mine once noted during a symposium on my book A Legal Theory for Autonomous Artificial Agents:

Statistical and probability-based machine-learning models (often combined with logical-knowledge based rules about the world) often produce high-quality and effective results (not quite up to the par of nuanced human translators at this point), without any assertion that the computers are engaging in profound understanding with the underlying “meaning” of the translated sentences or employing processes whose analytical abilities approach human-level cognition.

My response then was:

What is this ‘profound understanding’ that we speak of? Turns out that when we want to cash out the meaning of this term we seek refuge again in complex, inter-related displays of understanding: He showed me he understood the book by writing about it; or he showed me he understood the language because he did what I asked him to do; he understands the language because he affirms certain implications and rejects others….

[D]o I simply show by usage and deployment of a language within a particular language-using community that I understand the meanings of the sentences of that language?….If an artificial agent is so proficient, then why deny it the capacity for understanding meanings? Why isn’t understanding the meaning of a sentence understood as a multiply-realizable capacity?

Marcus is right to concentrate on competence in particular natural language tasks, but he needs to a) disdain a reductive view that takes one particular set of competencies to be characteristic of something as poorly defined as human intelligence, and b) not disdain the attainment of these competencies on the grounds that they do not emulate human cognitive structure.

5 thoughts on “Artificial Intelligence And ‘Real Understanding’”

  1. Hi Samir.

    I think we have reached a point at which AI can be considered an evolving form of intelligence in its own right. Just like the intelligence of a chimpanzee, it is not human intelligence, but it shares some observable traits with it, and it is well worth studying, and thus further evolving.

    []s
