In science fiction space operas, there's a common conundrum: should you construct and launch spaceships to nearby stars using current technology, or wait a decade or two to take advantage of (possible but not inevitable) technological progress that could cut the journey time by a hundred years or so?
Over the last year, I've been distressed by something similar. If you take the concept of AGI to its natural conclusion, anything decent I attempt in life will be done better and faster by machines. Whether that happens in 2 years or 10 is immaterial - it WILL happen.
Given that, is it worth investing so much time and energy in such endeavours? And if you think that your particular ambition will remain unaffected, then you haven't understood what AGI really means, or what will inevitably follow. I envy you.
A trivial example of the despondency I feel is that I can copy-paste this very note into the primitive AI chatbots of today, and get them to output a 2x better version of it in seconds! I can ask them to make it more passionate, more witty, or even more Gen-Z-ish. All the effort I put in feels redundant. And this is the worst this technology will ever be.
If true AGI is 10 years into the future (less likely), there might still be some payoff for our efforts in certain projects, but our long-term destiny is still sealed. Short of working to create a safe AGI, there is very little work that is truly worth doing right now; humanitarian tasks being a strong exception.
We like to talk and fret over politics and geopolitics, over the economy, and everything else in between. But we unconsciously overlook the rapidly approaching inflection point where AGI will fundamentally transform these things. It's like speculating about life inside a black hole by extrapolating from the way things work outside it. How useful is that?
Again, if you think otherwise, I urge you to seriously ponder the meaning of a true AGI, or give a good reason why we won't achieve it in the near future.
Assuming we're not running for our lives in the post-AGI world, the questions of the worth of human labour (intellectual or otherwise) and the purpose of human existence will move from Dostoevsky enthusiasts to the mainstream.
And I fear the answers we get might not be to our liking.