There is a lot of discussion about AI becoming conscious.
It's not 100% clear what the various researchers and pundits mean by that; I
would say the most important aspect of consciousness in the context of AI is
the notion of intent. This means that the AI becomes able to develop its own
goals and then optimise itself -- and the world around it, because it is so
intelligent and powerful -- to reach these goals. This leads to the challenge
of AI alignment (can we align the AI's goals with ours?) and the risks
associated with the various flavours of the paperclip maximiser narrative.
Personally, I don't think we will get there anytime soon (by
which I don't want to imply that research towards AI alignment doesn't make
sense). However, I don't think that AI needs to be conscious -- intentful --
for it to potentially become a serious problem for people and societies. Because
in some sense AIs will borrow the intents of humans. And these can be very
problematic. This perspective became clear to me over the last couple of
months, listening to and reading various experts (of which I am not one). Let
me explain by giving a bunch of examples.
The recommender algorithms used by all the well-known social
networks basically optimise for engagement. And engagement can be increased by
suggesting content that is sensationalising, radicalising, and polarising. This
drive towards "bad" content has clearly had (and still has) negative
consequences for societal cohesion, democratic deliberation and lots of
people's psychological health. These algorithms don't have intent. They didn't
optimise themselves towards suggesting this kind of content. They
"made" their human overlords do that, because the humans wanted to
optimise profit (the toy sketch after this paragraph illustrates the dynamic).
The same thing is happening with LLMs these days. Google felt pressured by
market and ego forces to release its LLM because OpenAI had released theirs --
despite serious doubts, inside Google and outside, about its readiness. The
problem also occurs on an international level. The US will never decide to slow
down AI deployment because "if we slow down, China will do it anyway".
Likely true.
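To make the first example concrete, here is a toy sketch -- entirely my own
illustration, not any platform's actual system; the posts, scores, and
rank_feed function are invented -- of a feed ranker whose only objective is
predicted engagement:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # e.g. a model's estimate of clicks or watch time
    polarising: bool             # invisible to the ranking objective below

def rank_feed(posts):
    # The only objective is predicted engagement; nothing penalises polarisation.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Nuanced policy explainer", 0.2, polarising=False),
    Post("Outrage-bait hot take", 0.9, polarising=True),
    Post("Cute animal video", 0.6, polarising=False),
])

for post in feed:
    print(f"{post.predicted_engagement:.1f}  {post.title}")
# The outrage-bait item ranks first -- not because the code "intends" anything,
# but because the objective its human designers chose rewards whatever people
# engage with most.
```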
So, to summarise: even an AI that does not have its own
intent will probably create serious risks for our societies, because the
intents of humans are likely to drive it in problematic directions. Powerful
AI, even unconscious and without intent, can be quite problematic at scale.
Two caveats. First, while I emphasise the negative consequences of
AI, it will also have lots of positive ones. I have not decided whether I think
the benefits will outweigh the risks.
Second, this dynamic is not restricted to AI. There are lots
of downward spirals driven by market and ego forces that also have bad
consequences for society, for example the (on average) decreasing quality of
journalism. However, AI makes everything so much faster and larger in scale
that it might become a much bigger problem than low-quality journalism.
And a final note: I do not think that this is capitalism's
fault. I think these dynamics are deeply ingrained in humans -- which is why I
always wrote "market and ego forces".
Anyway. What do you think?