Electric Dreams, Tech

Here’s how strange 2050 will be
Ray Kurzweil has built a reputation for his accurate predictions around AI – and he sees some big events soon.
When it comes to artificial intelligence, no one has more experience than Ray Kurzweil. During his 61 years in the development of AI, he invented the first commercially available large-vocabulary speech recognition software, received the 1999 National Medal of Technology and Innovation from President Bill Clinton, and was even inducted into the National Inventors Hall of Fame in 2002.
In other words, he’s a big deal. But alongside his many practical achievements, Kurzweil also has a long track record predicting future technologies.
Most notably, in 2005, he published the book The Singularity is Near, which forecast the future of AI, addressing the rapid advancement of computers, and the way humans would become reliant on technology and AI. It was, let’s face it, worryingly accurate.
But what do the next 20 years have in store? Kurzweil answers just that question in his follow-up book, The Singularity is Nearer, a collection of predictions about how AI will have an even greater influence on our lives.
We sat down with the man himself to unpack the biggest changes to expect.
One of Kurzweil’s key predictions revolves around Artificial General Intelligence (AGI). While many AIs now are designed for a specific task, an AGI would match or surpass human capabilities across a wide range of cognitive tasks.
Currently, while impressive, AI models are only able to work in one area: ChatGPT is a text-based model, DALL-E and Midjourney generate images, and companies like Wix are using AI for web design.
Artificial General Intelligence is a hypothetical software that can do it all – it could learn and adapt to new skills and situations, and understand human reasoning. Think Scarlett Johansson’s virtual assistant in the 2013 film Her.
According to Kurzweil, an AGI capable of this will be available by 2029. “But that’s now starting to look like a conservative view,” he tells BBC Science Focus. “Other experts say it will be two years, maybe three.”
He adds: “When I first said this date in 1999, people found it alarming. Stanford organised a conference of 1000 people to discuss how realistic this was, and their view was that it would happen, but not within 30 years – the estimate was 100 years.”
Will an AI ever create lyrics that are emotionally meaningful to humans?
I’m somewhat sceptical of that one as well. AI can generate lyrics that are interesting and have an interesting narrative flow. But lyrics for songs are typically based on people’s life experiences – what’s happened to them. People write about falling in love, things that have gone wrong in their lives, or something like watching the sunrise in the morning. AIs don’t do that.
AI will master the art of being human
While 2029 will sound unrealistic to some, it follows from the speed at which artificial intelligence has risen. Kurzweil points to the exponential gains seen in the technology, which compound faster each year.
“Economists assume that the flow of these technologies is linear – it goes 1, 2, 3, 4. But really it is more like 1, 2, 4, 8. When something grows that quickly, advances seem to happen suddenly, one after the other,” says Kurzweil.
“Today’s computers can do half a trillion calculations per second. That would have been seen as an impossibility just 10 years ago.”
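The gap Kurzweil describes is easy to check with a few lines of arithmetic. A minimal sketch (purely illustrative, using the two sequences from his quote):

```python
# Linear growth adds a fixed amount each step; exponential growth doubles.
steps = 8
linear = [1 + n for n in range(steps)]        # 1, 2, 3, 4, ...
exponential = [2 ** n for n in range(steps)]  # 1, 2, 4, 8, ...

print(linear)       # [1, 2, 3, 4, 5, 6, 7, 8]
print(exponential)  # [1, 2, 4, 8, 16, 32, 64, 128]
```

After only eight steps the doubling sequence is already 16 times ahead of the linear one; after 30 steps it has passed half a billion.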
One of tech’s big talking points right now is a rather lofty aim – to defy ageing. While many have tried, Kurzweil believes we are nearing a point where we can not just slow ageing, but combat it entirely.
“We’re going to solve ageing in the next 5 to 10 years. Right now, as you get older, the probability of you dying the next year goes up. By the time you get to your 90s or older, there’s a very high chance of dying each year,” says Kurzweil.
“We’re going to overcome that. It’s already happening in medicine – we’re seeing AI rapidly speed up medicine and drug discovery. By the time we get to around 2029 to 2035, I firmly believe we will reach longevity escape velocity.”
A somewhat controversial concept, longevity escape velocity suggests that people could live indefinitely by extending their remaining life expectancy faster than time passes. To achieve this, medical technology would need to add at least one year of life expectancy for every year that goes by – effectively gaining a year for every year lived.
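As a back-of-the-envelope illustration (toy numbers only, not a medical model), the arithmetic of escape velocity looks like this:

```python
# Toy model of 'longevity escape velocity' (illustrative numbers only).
# Each calendar year consumes one year of remaining life expectancy,
# while medical advances add back some number of years.

def remaining_expectancy(start_years, annual_gain, years):
    """Track remaining life expectancy over a number of calendar years."""
    remaining = start_years
    for _ in range(years):
        remaining = remaining - 1 + annual_gain  # one year lived, some gained
    return remaining

# Below escape velocity: gains of 0.5 years/year -> expectancy shrinks.
print(remaining_expectancy(40, 0.5, 10))  # 35.0

# At escape velocity: one full year gained per year lived -> expectancy holds.
print(remaining_expectancy(40, 1.0, 10))  # 40.0
```

The tipping point is simply a gain of one year per year: below it, life expectancy still runs down, just more slowly; at or above it, the clock stops losing ground.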
If this sounds too good to be true, you’re not alone. The theory is hypothetical and faces many obstacles: ageing is a highly complicated process, driven by a variety of factors both within and outside the body.
However, people are already living longer than they did just a few decades ago. With rapid advances in healthcare and technology, average life expectancy is likely to keep increasing.
“This doesn’t guarantee you’ll live forever. Any 20-year-old can have complications and could die tomorrow. However, the likelihood will decrease. Companies are making artificial lungs and kidneys, treatments are exponentially better and our understanding of diseases is improving,” says Kurzweil.
Not the Big Threat
AI without the capacity to think is more dangerous than AI with it. What threat could AI possibly pose without minds or agency? In asking this question, we forget that an impostor can be more dangerous than a competitor.
Human survival is not endangered by AI, at least not for reasons involving machine sentience. But extinction is not the only risk. Losing the humane capacities that make our mode of existence worth choosing and preserving is another.
Imagine that aliens landed tomorrow and offered us a choice:
Option A: They invade Earth and we take our chances resisting.
Option B: They leave the planet alone, but only after replacing us with doppelgangers that carry on all the usual human-like activities (eating, talking, working) with no capacity for independent thought or creative vision, no ability to break from the patterns of the past and no motives beyond the efficient replication of the existing order.
Is Option B the better choice? Or is it worse than the peril of extinction?
I’m not worried that today’s AIs will turn into these mindless doppelgangers. I’m worried that we will. We’re already willingly giving up the humane capacities that ChatGPT lacks.
Boosters of AI-powered writing apps are advertising, as a benefit, the chance to surrender the most important part of storytelling – envisioning where a story might go – to a bot that will simply present us with plausible preformed plot twists to choose from. People are lining up to thank the ‘innovators’ who show us how to train ChatGPT to write like we would, so that we may be liberated from the task of forming and articulating our thoughts.
The philosopher Hans Jonas warned us of the existential risk of a future ‘technopole’ that celebrates the “quenching of future spontaneity in a world of behavioural automata,” putting “the whole human enterprise at its mercy.” He didn’t make clear whether these automata will be machines or people. I suspect the ambiguity was intended.
Companies are now replacing scriptwriters, artists, lawyers and teachers – people who have crafted their talents over decades – with machines that produce output that’s ‘good enough’ to pass for the labours of our thinking. The replacement is worrying, but far more concerning is the increasingly common argument that thinking is work we should be happy to be rid of. As one Twitter user put it: what if the future is merely about humans asking the questions, and letting something else come up with the answers?
As machine-learning music specialist Prof Nick Bryan-Kinns explains, new neural networks can write original music – but may never compose meaningful lyrics.
Take a hike, Bieber. Step aside, Gaga. And watch out, Sheeran. Artificial intelligence is here and it’s coming for your jobs.
That’s, at least, what you might think after considering the ever-growing sophistication of AI-generated music.
While the concept of machine-composed music has been around since the 1800s (computing pioneer Ada Lovelace was one of the first to write about the topic), the fantasy has become reality in the past decade, with researchers such as François Pachet producing entire albums co-written by AI.
Some have even used AI to create ‘new’ music from the likes of Amy Winehouse, Mozart and Nirvana, feeding their back catalogue into a neural network.
Is sentient AI really the threat?
What if ‘will AIs pose an existential threat if they become sentient?’ is the wrong question? What if the threat to humanity is not that today’s AIs become sentient, but the fact that they won’t?
The release of OpenAI’s ChatGPT has generated a flood of commentary, in the media and scientific circles, about the potential and risks of artificial intelligence (AI).
At its core, ChatGPT is a powerful version of the large language model known as GPT. GPT stands for generative pre-trained transformer: a type of machine learning model that extracts patterns from a vast body of training data (much of it scraped from the Internet) to generate new data composites (such as chunks of text) using the same patterns.
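That core idea – extract patterns from a body of training text, then reuse them to generate new composites – can be shown with a toy word-pair model. This is a drastic simplification for illustration only; GPT itself is a transformer neural network trained on billions of tokens, not a lookup table like this:

```python
import random

# Toy illustration of pattern-based text generation: learn which word
# follows which in the training data, then sample from those patterns.
training_text = "the cat sat on the mat and the cat slept"
words = training_text.split()

# Extract patterns: for each word, record the words that follow it.
follows = {}
for current, nxt in zip(words, words[1:]):
    follows.setdefault(current, []).append(nxt)

# Generate a new composite using the same patterns.
random.seed(0)
word, output = "the", ["the"]
for _ in range(5):
    word = random.choice(follows.get(word, words))
    output.append(word)
print(" ".join(output))
```

Even this crude model produces plausible-looking word sequences it never saw verbatim, which is the pattern-recombination trick that, at vastly greater scale and sophistication, makes GPT’s output so fluent.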
CEOs of AI companies, politicians and prominent AI researchers are now publicly sounding alarms about the potential of tools like GPT to pose an existential threat to humanity. Some claim that GPT may be the first ‘spark’ of artificial general intelligence, or AGI – an achievement predicted to entail the arrival of sentient, conscious machines whose supreme intellects will doom us to irrelevance.
But as many more sober AI experts have observed, there’s no scientific basis for the claim that large language models are, or ever will be, endowed with subjective experiences – the kind of ‘inner life’ that we speak of when we refer to conscious humans or other creatures for whom intelligence and sentience go hand in hand, such as dogs, elephants and octopuses.
Everything we know about sentience is incompatible with a large language model, which lacks any coupling with the real world beyond our text inputs. Sentience requires the ability to sense and maintain contact with the multidimensional, spatiotemporally rich, flowing world around you, through sensorimotor organs and an embodied nervous system that’s coupled with the physical environment. Without this coupling to reality, there’s nothing to feel, nothing to be grasped, no reality to form as a stable subject within.
Does that mean AI is nothing to worry about? Some AI leaders, like Prof Yann LeCun, draw that conclusion. Their view is that today’s AIs are unlikely to lead to AGI, so they pose no grave threat to humanity. Unfortunately, these techno-optimists are also wrong. The threat is very much there. We’ve just fundamentally misunderstood (or, in some cases, perhaps wilfully misrepresented) its nature.
How easy is it to create AI music?
Even stranger, this July countries across the world will compete in the second annual ‘AI Song Contest’, a Eurovision-style competition in which all songs must be created with the help of artificial intelligence. (In case you’re wondering, the UK scooped more than nul points in 2020, finishing in a respectable 6th place.)
But will this technology ever truly become mainstream? Will artificial intelligence, as artist Grimes fears, soon “make musicians obsolete?”
To answer these questions and more, we sat down with Prof Nick Bryan-Kinns, director of the Media and Arts Technology Centre at Queen Mary University of London. Below, he explains how AI music is composed, why this technology won’t crush human creativity – and how robots could soon become part of live performances.
Music AIs use neural networks – large collections of simple computing units that try to mimic how the brain works. You can basically throw lots of music at a neural network and it learns patterns, just like the human brain does by repeatedly being shown things.
What’s tricky about today’s neural networks is that they’re getting bigger, and it’s becoming harder for humans to understand what they’re doing.
We’re getting to a point now where we have these essentially black boxes that we put music into and nice new music comes out. But we don’t really understand the details of what it’s doing.
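The pattern-learning Bryan-Kinns describes can be sketched with a toy model. Real systems use deep neural networks; this tiny example (with made-up notes) only shows the ‘learn the patterns, then generate’ loop:

```python
from collections import Counter

# Toy stand-in for a music model: count which note most often follows
# each note in the training melody, then generate by always picking
# the most frequent follower.
training_melody = ["C", "E", "G", "E", "C", "E", "G", "C", "E", "G", "E", "C"]

transitions = {}
for note, nxt in zip(training_melody, training_melody[1:]):
    transitions.setdefault(note, Counter())[nxt] += 1

def generate(start, length):
    melody = [start]
    for _ in range(length - 1):
        counts = transitions[melody[-1]]
        melody.append(counts.most_common(1)[0][0])  # most frequent follower
    return melody

print(generate("C", 6))  # ['C', 'E', 'G', 'E', 'G', 'E']
```

Unlike this transparent little table, a large neural network spreads its learned patterns across millions of weights – which is exactly why the real systems end up as the black boxes he describes.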
These neural networks also consume a lot of energy. If you’re trying to train AI to analyse the last 20 years of pop music, for instance, you’re chucking all that data in there and then using a lot of electricity to do the analysis and to generate a new song. At some point, we’re going to have to question whether the environmental impact is worth this new music.
I’m a sceptic on this. A computer may be able to make hundreds of tracks easily, but there is still likely a human selecting which ones they think are nice or enjoyable.
There’s a little bit of smoke and mirrors going on with AI music now. You can throw Amy Winehouse’s back catalogue into an AI and a load of music will come out. But somebody must go and edit that – they must decide which parts they like and which parts the AI needs to work on a bit more.
The problem is that we’re trying to train the AI to make music that we like, but we’re not allowing it to make music that it likes. Maybe the computer likes a different kind of music than we do. Maybe the future would just be all the AIs listening to music together without humans.
We will merge with AI in the near future
In both his books, Kurzweil refers to something known as ‘The Singularity’. A term borrowed from physics, the singularity refers to a hypothetical future point in time where technological growth becomes both uncontrollable and irreversible.
Like many of his other predictions, Kurzweil puts a date on this: 2045. “This will be the singularity – where we no longer have control of AI. In physics, the term singularity means something so powerful that it exceeds our understanding so much that we can’t even imagine what will happen,” he says.
The so-called ‘Singularity’ is explored deeply in both of his books, but there are a few key parts to the idea. The most notable is his belief that we will ‘merge’ with AI, creating a new form of intelligence.
This would mean a dramatically enhanced level of human intelligence, allowing us to overcome limitations. He also believes that nanotechnology will play a crucial role in connecting the human brain with computers, creating a seamless interface between the two.
Like many of the topics he touches on, Kurzweil’s beliefs about the singularity aren’t without criticism. They rely on the continued rapid advancement of technology, and would require a far better understanding of both intelligence and humans’ ability to merge with technology.
One thing is for sure: Kurzweil is excited about the future, ready to defy ageing and merge with AI by 2050.
Ray Kurzweil is an inventor, author and futurist. He has worked in the field of AI for 61 years. In 2005, he published the book The Singularity is Near, which addressed the future of AI and established a variety of commonly held beliefs about AI today. He followed it up in 2024 with The Singularity is Nearer.





