Making Intelligence Artificial

Artificial Intelligence (AI) refers to the capacity of a programmable system to produce output reminiscent of intelligent behavior, and to adapt rapidly by forming pattern associations that can be refined through learning (machine learning).

The technical proficiency of virtual reality systems is often measured against their ability to replicate experiential reality in some way: allowing users to feel present in a virtual environment (VE) such that they forget about the mediation of the head-mounted display (HMD) and the computers producing the graphical interface, perceiving it (if only transiently) as immediate and real. Now it seems that AI and VR are merging, and the result is a transmission of the world as cognized or perceived from one intelligent agent to another. Thus, it is not so much that we are creating artificially intelligent systems as that we are translating our own intelligence into something that can be reproduced artificially.

This is what scares me. It also intrigues me. But it compels me to proceed with caution.

Take my Painter Project as an example.

[Image] Subject painting in the Painter Project at Trailer Park IO, 29 July 2016.

Here, the user witnesses the painter’s experiential reality. The user sees the painter’s hand, and can place the tracked rendering of their own hand (using Leap Motion) on top of hers to move with her on their own physical canvas. The painter is not present, and yet she guides the subject through an otherwise blind experience in a very intimate way, one that arguably aligns the user closer to her experience than to their own. It is a strangely intimate, yet anonymous relationship between the user and the painter. And afterwards, when subjects look at their blindly painted canvas, a highly abstract rendition of a face in which the painter’s painting cannot easily be recognized, they often feel a sense of failure. “What did I really accomplish here?”, they might wonder. They feel close to someone they may never meet. It is not really a shared experience; it’s a one-sided communication.
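For the curious, here is a minimal sketch of the hand-tracking logic behind the overlay. It assumes the legacy Leap Motion v2 Python bindings and a hypothetical painter_path.json file holding the painter’s pre-recorded palm positions; the actual installation differs in its details.

```python
# A minimal sketch of the hand-overlay idea, assuming the legacy
# Leap Motion v2 Python bindings. painter_path.json is a hypothetical
# recording of the painter's palm positions, not part of the real build.
import json
import time

import Leap  # legacy Leap Motion SDK v2 Python bindings


def load_painter_path(path="painter_path.json"):
    """Load the painter's pre-recorded palm positions: a list of
    (t, x, y, z) samples in Leap's millimetre coordinate space."""
    with open(path) as f:
        return json.load(f)


def palm_offset(controller, target_xyz):
    """Distance between the user's tracked palm and the painter's
    recorded palm at this moment; None if no hand is visible."""
    frame = controller.frame()
    if frame.hands.is_empty:
        return None
    palm = frame.hands[0].palm_position  # Leap.Vector, in millimetres
    dx = palm.x - target_xyz[0]
    dy = palm.y - target_xyz[1]
    dz = palm.z - target_xyz[2]
    return (dx * dx + dy * dy + dz * dz) ** 0.5


controller = Leap.Controller()
path = load_painter_path()
start = time.time()

for t, x, y, z in path:
    # Wait until the recording reaches this sample, then report
    # how far the user's hand is from the painter's.
    while time.time() - start < t:
        time.sleep(0.005)
    d = palm_offset(controller, (x, y, z))
    if d is not None:
        print("offset from painter's hand: %.1f mm" % d)
```

In the installation itself, the offset drives the rendered hand overlay rather than a printout, but the principle is the same: the user’s hand is continuously compared against the painter’s recorded movement.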

[Image] Subjects’ paintings created through the Painter Project.

Many businesses and programmers have pushed me to develop the experience further: eye tracking, haptic feedback, motion sensors, tracked positioning of the user’s paintbrush on the canvas, letting the user paint on top of the painter’s painting, and so on. The sense has been that this could be an arts education tool.

I never really wanted to be a virtual reality developer. My goal has always been to unite cognitive neuroscience and phenomenology in a mutual, bidirectionally informed understanding. My interest in VR stems from its emphasis on experience, and from its potential as a tool for phenomenologists to enter into conversation with neuroscientists.

For several nights, something has been tormenting me, and I finally understand what it is.

If I can wire someone else up to be oriented to my experience in more or less the same way that I am, then my experience starts to feel like something that can be made into an artificial intelligence. And that feels weird. More specifically, if I can make you see, hear, touch, and move in the same experiential mode as a painter while she paints, that seems to “wire you into” her experience, which has been encoded so thoroughly as to be programmable. It takes the expertise, the “intelligence”, out of the experience and makes it something that can be transmitted artificially through engineered systems.

I do not want that. My goal has always been to create conditions of possibility for new kinds of social interaction, ways for people to connect that foster greater interpersonal understanding.

The Painter Project was designed to stimulate empathy and creativity. The goal was to orient the user towards the painter’s experience so as to learn about the creative process and to increase interpersonal understanding and communication. My thought was that a setup like this could take two people further into a conversation about their experience, and my results pointed in that direction; experience is hard to talk about. I also thought users might come away feeling more creative, or more inspired to engage in creative processes, and again the results were positive.

However, there is a difference between understanding what it is like to feel something and having that feeling for oneself. Ultimately, I think my experiment is more about understanding, and the transformed interaction that comes from that understanding.

This makes me want to go back to my original research proposal for the project, in which I had intended to stage the project as a live stream from a painter to a participant wearing an HMD. Together, they would share a view of the painter’s sense reality, and they would move together. Sharing a point of view in a live interaction while engaging in a task together is beautiful to me, and it opens up a new space for interaction. The participant would again move along with, or follow, the movements of the painter; of course, there could be a slight delay, but that would be okay. The painter could talk with the participant along the way, describing his or her process and creative imagination, and the participant could respond and interact. The interaction could result in an intersubjectively imagined painting: something that never truly exists except between the minds of the painter and the participant.
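To make the proposal concrete, here is a rough sketch of the kind of live POV relay I have in mind, not a real implementation: OpenCV, a plain TCP socket, and the port number are all my assumptions here, and a production version would use a proper streaming stack.

```python
# A rough sketch of a live first-person video relay: the painter's
# head-mounted camera is captured with OpenCV and pushed, frame by
# frame, over a plain TCP socket to the participant's HMD client.
# The endpoint (0.0.0.0:9999) is hypothetical.
import socket
import struct

import cv2  # OpenCV for camera capture and JPEG encoding

HOST, PORT = "0.0.0.0", 9999

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
conn, _ = server.accept()  # the participant's client connects here

cap = cv2.VideoCapture(0)  # the painter's head-mounted camera
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # JPEG-encode each frame; a slight delay on the wire is
        # acceptable, as noted above.
        ok, buf = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        data = buf.tobytes()
        # Length-prefix each frame so the client can re-frame the stream.
        conn.sendall(struct.pack(">I", len(data)) + data)
finally:
    cap.release()
    conn.close()
    server.close()
```

The point of the sketch is only that the plumbing is simple; the hard part is the live, two-way conversation layered on top of the shared view.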

Maybe somewhere there is a lab equipped with the technology to allow this to happen. Or I just need to wait another 5-10 years.
