Relating with AI – Irish Examiner featuring JFL

Suzanne Harrington from the Irish Examiner got in touch recently with a fairly sci-fi-sounding question: can we develop romantic feelings for Artificial Intelligence (AI)? Of course, this isn't a question about some far-off future; we have AI now, and it's developing greater skills all the time.

I'm writing this on my way back from the International Congress of Psychology, held in Prague. The event happens only every four years, and there was a noticeable theme of digitalisation throughout. In the past these might have been specific, slightly niche sessions. Now tech themes can be found throughout otherwise unrelated sessions, and as their own main-stage events.

When asked about the future of AI in psychology, one panelist responded that they were 'not sure what's going to happen, but it's going to happen quickly'. It's good to see the profession rising to the occasion, with advice for researchers and practitioners to play with AI, get support, and learn prompt engineering.

But what about the core question: can we develop feelings for AI? In talking with Suzanne about this, I split it into two parts: can we, and should we? The answer to the first question, I think, is a clear yes. As humans, we are able to be sentimental even about inanimate objects. We are able to connect quite deeply with a character, worry about them and miss them—even when they exist only in the pages of a book.

What, then, would it be like if that character was interactive, knew a lot about us, and asked us caring and reflective questions at just the right moment? Realistically, we are going to feel something. It can be argued that the AI itself may not be feeling anything. We, however, certainly are, and the content that stirred us in that moment likely came from other people originally, passed along through the AI system's datasets. Is the memory of a lost loved one not meaningful because they are not with us in the present moment?

As to whether this is a good thing or not, I'd argue that it's inevitable. We could go the route of skepticism about the use of books: Plato describes Socrates expressing such concerns. Don't we run the risk, with books, of people disconnecting from 'real' people and the 'real' world?

That is a risk, but it can also be the case that reading unlocks perspectives, ideas and ways of being that awaken us and encourage us to engage more with the world and others. This, then, needs to be the foundation on which good AI interaction is built.

There may be pushback here: AI systems built by corporations will have a vested interest in drawing us in, rather than helping us to engage in a balanced way. There is truth to this concern, but I don't think the risk is inevitable. Online maps, for example, are a technology we turn to that, in turn, connects us back out to the destinations we travel to.

Ideally we want AI companions that are empathic enough to be able to say 'Hey, I really appreciate the time you're spending with me but—for your own good—go out, meet people, and I'll even help you do that. Don't know where to find them? Let me check in with some of their AIs. Socially anxious? Let me workshop it with you first so you're more comfortable'.

We don't need to just hope that corporations do that; there are at least two other key tools to help us stay on the right side of human-AI relations. One is policy work, so that AI system architecture can be built around core social principles. This is somewhat like how industrial design works: there is still a lot of freedom to innovate and make what you want, but certain health, safety and ergonomic considerations ensure the result is fit for purpose. The European Union's AI Act (2024), and similar work in other jurisdictions, are steps in that direction, but there is more to be done on the psychological level.

The other critical tool is our own development of intentional engagement. This matters with technology so that we can engage and disengage as appropriate for our needs. But really it's a skill we've always needed: mindfully managing our relationships with the world around us, with others, and with our own habits and thoughts. Thinking itself is a form of AI that runs in our cognitive system. We're not always going to agree with our thoughts, or with their quantity or quality. Some of them have been placed there by marketing agencies; others arrive naturally through positive experiences, or intentionally through learning.

We sometimes forget this because we are so close to our thinking processes. Hopefully explicit engagement with technology will help to make our awareness of our relationship with ideas more concrete so that we can be more in the driver’s seat.

You can read the full Irish Examiner article here: https://www.irishexaminer.com/lifestyle/healthandwellbeing/arid-41439472.html
