Virtually resurrecting the dead...
A few months back, there was news of a Google engineer declaring that the AI model he was interacting with was conscious, because its conversations were so amazingly realistic.
Experts quickly clarified that the model was merely mimicking human responses and wasn't capable of consciousness. But this and other emerging capabilities of LLMs reminded me of an idea from the brilliant sci-fi novel by Alastair Reynolds - "Revelation Space".
It imagines a future roughly 500 years from now where it's common for the rich to keep digital versions of themselves, in two possible formats. The cheaper version is called a "beta" - a computer program perfectly capable of mimicking the original's behaviour, mannerisms and thought patterns - but NOT conscious.
Then there is the "alpha" - the most sophisticated digital rendering of a person's neural architecture, which is everything a "beta" is and, in addition, self-aware - and hence protected under existing laws. In the novel, it is illegal to have an alpha version of yourself while your physical self is still alive.
An "alpha" version is still several decades (centuries?) in the future, but we might be tentatively close to a nascent form of "beta" simulation.
We already have AI that can write sonnets and compose music imitating the style and voice of great artists. Now imagine the same AI having access to ten years of every word we speak in our daily lives. This would be possible through something as simple as granting an app permission to record everything we say (or an app doing so anyway, illegally).
From this treasure trove of personalised information, it could mimic our speech patterns, typical responses and emotional outbursts, making a user feel as if they were interacting with the original person, almost to the last detail.
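To make the idea concrete, here is a deliberately tiny sketch of "learning someone's speech patterns from transcripts". A real "beta" would involve fine-tuning a large language model on years of data; this toy uses only a word-level bigram model, and the function names (`train_bigrams`, `mimic`) and the sample transcript are purely illustrative assumptions, not any real system's API.

```python
import random
from collections import defaultdict

def train_bigrams(transcript):
    """Count which word tends to follow which in the person's speech."""
    model = defaultdict(list)
    words = transcript.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def mimic(model, start, length=8, seed=0):
    """Generate text that statistically resembles the training speech."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length):
        if word not in model:
            break  # dead end: the person never said anything after this word
        word = random.choice(model[word])
        output.append(word)
    return " ".join(output)

# Hypothetical snippet of a decade of recorded daily speech
transcript = "i love chai in the morning and i love long walks in the evening"
model = train_bigrams(transcript)
print(mimic(model, "i"))
```

With enough data, even this crude statistical approach starts echoing a person's pet phrases; a modern LLM does vastly better, which is exactly what makes the "beta" scenario feel near.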
Add to that the technology that drives deepfakes, and you'd have a fully realistic video chat with a person who might not even be alive today!
Perhaps the technology would find use in assisting those who have lost someone close, and help them achieve some sort of closure. Or it might exploit the same people and psychologically manipulate them at their weakest moment. Perhaps both will happen.
And then there are the frauds. Say you get a video call from a profile with your friend's photo, and she's panting and in tears. Would you be able to resist coming to her aid, physically or with money? How many millions of such scams will occur before technology (and awareness) evolves to detect them?
On the upside, it would be pretty awesome to have a one-on-one video chat with "Shahrukh Khan" 😎
... Or one with your own self!! Now, that would be something unearthly 😄