Artificial Agents and Knowledge as Tractably Accessible, Usable Information

In commenting on my post on teaching philosophy by reading out loud in class, David Auerbach quotes Georges Dreyfus' The Sound of Two Hands Clapping on the education of a Tibetan monk, a process that includes the memorization of scriptures supplemented by active, repeated vocalization. Dreyfus' quote (please read Auerbach's comment for the full quote) concludes with:

This educational process reflects the belief that knowledge needs to be immediately accessible rather than merely available. That is, scholars must have an active command of the texts that structure the curriculum, not simply the ability to retrieve information from them. Knowing where bits of information are stored is not enough: the texts must inform one’s thinking and become integrated into one’s way of looking at the world.

I find interesting resonances between this analysis of knowledge and one offered in my recent A Legal Theory for Autonomous Artificial Agents. There, in attempting to make coherent the notion of attributing knowledge to an artificial agent, we began with an intuition captured in the following example (originally due to Andy Clark in his Natural Born Cyborgs):

As I walk down the street, I am asked by a passer-by, “Excuse me, do you know the time?” I answer, “Yes,” as I reach for my cell-phone to check what time it is. The plausibility of this exchange suggests we readily attribute knowledge to ourselves and others when the relevant information is easily accessible and usable…. This example is extensible to those cases when we are asked if we know a friend’s telephone number stored in our cellphone’s memory card. Or imagine someone who knows I am carrying a cellphone pointing to me and suggesting I should be asked the time: “He knows what time it is.”

The crucial bit, with respect to the Dreyfus quote above, is usable. Later, building on this and other examples to bolster our claim that “Knowledge claims speak to a bundle of capacities, functional abilities, and dispositions; their usage is intimately connected to a pragmatic semantics,” we offer the following analysis for artificial agents:

An artificial agent X is attributed knowledge of a proposition p if and only if:
1. p is true;
2. X has ready access to the informational content of p;
3. X can make use of the informational content of p to fulfill its functional role; and,
4. X acquired access to this informational content using a reliable cognitive process.
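The four conditions can be sketched as a simple predicate over a hypothetical agent. This is an illustrative toy only: the `Agent` class, its method names, and the provenance tag are my own assumptions, standing in for whatever real machinery (databases, retrieval procedures, input-validation code) an actual agent would have.

```python
# Illustrative sketch of the four-condition analysis. The Agent interface
# and all names here are hypothetical, introduced only to mirror the text.
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Propositions the agent can retrieve cheaply, each mapped to a record
    # of how it was acquired (a stand-in for a reliable cognitive process).
    store: dict = field(default_factory=dict)

    def has_ready_access(self, p: str) -> bool:
        # Condition 2: access must be tractable (here, a constant-time lookup,
        # not an expensive search or derivation).
        return p in self.store

    def can_use(self, p: str) -> bool:
        # Condition 3: a stand-in for "can deploy p in its functional role";
        # in this toy, anything retrievable is usable.
        return p in self.store

    def acquired_reliably(self, p: str) -> bool:
        # Condition 4: provenance must record a reliable acquisition process.
        return self.store.get(p) == "reliable-process"

def attributes_knowledge(agent: Agent, p: str, p_is_true: bool) -> bool:
    """The agent is attributed knowledge of p iff all four conditions hold."""
    return (p_is_true                        # 1. p is true
            and agent.has_ready_access(p)    # 2. ready access
            and agent.can_use(p)             # 3. usable in functional role
            and agent.acquired_reliably(p))  # 4. reliably acquired

a = Agent(store={"shipping address is X": "reliable-process"})
print(attributes_knowledge(a, "shipping address is X", True))   # True
print(attributes_knowledge(a, "shipping address is Y", True))   # False
```

The conjunction makes the structure of the analysis explicit: failure along any one dimension (truth, tractable access, usability, or reliable acquisition) defeats the knowledge ascription.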

An extended explication of this analysis appears in the book; for present purposes, I'll include an edited version here.

The first condition retains the intuition that propositions must be true to be known. The second condition suggests that an artificial agent required to conduct intractable searches of its disk or other storage, or to engage in other computationally expensive procedures, before being able to locate or derive a particular item would be pushing the limits of the plausibility of such ascriptions. Moreover, there are at least two dimensions along which the ready access, or what we might call the “readiness to hand,” of a particular item of information can vary: the physical and the logical or computational. What counts as knowledge can therefore vary with the strictness of the criteria applied along each of these dimensions.

The third condition requires the agent to be able to use the informational content of p to display functional competence: an artificial agent reveals its knowledge of p through the ready availability of the proposition in facilitating the agent’s functionality; it demonstrates its knowledge through its functions.

The fourth condition requires that knowledge attributed to an agent have been acquired non-accidentally, not simply dropped into its memory store by mistake or by fluke. This condition parallels traditional reliabilist conditions on knowledge.

We are thus able to conclude:

When we say, “Amazon.com knows my shipping address is X,” our analysis implies several facts about Amazon’s website agent. Firstly, the shipping address is correct. Secondly, it is readily accessible to the agent through its databases: Amazon would not be said to know my address if it was only accessible after the execution of a computationally intractable procedure. Thirdly, the shopping agent is able to make use of the informational content of the address to fulfill its functions: it is able successfully to send books to me. Fourthly, the shopping agent acquired this relevant information in the “right way,” i.e., by means of reliable cognitive processes: its form-processing code was reasonably bug-free, carried out appropriate integrity checks on data without corrupting it, and transferred it to the back-end database scripts that populate its databases. This last condition ensures the shipping address was not stored in the agent’s data stores accidentally.

Update: On February 14-16, the Concurring Opinions blog will be conducting an online symposium on my book. I expect this analysis to be discussed there.
